In this short tutorial we’ll show you how to integrate camera functionality into your weave.ly applications.
Don’t feel like reading? Don’t worry, we’ve got you covered. Sit back, relax and let us run you through it in this video!
The figure below showcases the design we’ll be using as an example to explain how to interact with camera blocks. The example sports two screens. On the left screen the user will see the camera feed (in the grey rectangle) and be able to take a picture by pressing the black button. On the right screen the user will see the picture they’ve taken.
Tagging the Camera using the Plugin
The first step in using the camera in your weave.ly applications is to tag the part of your design in which the camera should render. Any rectangle can be used to show the camera feed; weave.ly automatically makes sure that the feed respects the rectangle’s dimensions. The figure below showcases this for our example.
Moreover, for the sake of this example we’ve tagged the rectangle on the second screen with the “Image” functionality, as it will be used to display the picture the user has just taken.
Configuring the Camera in the Weave.ly Editor
Tagging a rectangle in the plugin does not make the camera work on its own: you also need to configure the resulting block in the weave.ly editor. More specifically, a camera block provides three input triggers (as shown below):
- One to specify when the camera feed should start (top left option port)
- One to specify when the camera feed should stop (top right option port)
- One to specify when a picture should be taken (left input port)
Whenever a picture is taken, the block outputs an image from its output port. You can configure which camera to use on your user’s device (i.e. front or back) in the block configuration panel on the right.
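weave.ly’s generated code isn’t public, so as a mental model only, here is a hedged sketch of the standard browser APIs that a start/stop/snap camera block like this typically wraps: `getUserMedia` with a `facingMode` constraint for the front/back choice, stopping the stream’s tracks, and drawing a video frame onto a canvas to produce the output image. All function names here are our own invention for illustration.

```typescript
// Sketch only: how a web-app camera block's three triggers and one
// output could map onto standard browser APIs. Not weave.ly's code.
declare const navigator: any; // browser global (assumed)
declare const document: any;  // browser global (assumed)

type Facing = "front" | "back";

// Build the getUserMedia constraints for the configured camera.
function cameraConstraints(facing: Facing) {
  return {
    video: { facingMode: facing === "back" ? "environment" : "user" },
    audio: false,
  };
}

// "Start" trigger: attach the camera stream to the tagged rectangle's <video>.
async function startCamera(videoEl: any, facing: Facing) {
  const stream = await navigator.mediaDevices.getUserMedia(
    cameraConstraints(facing)
  );
  videoEl.srcObject = stream;
  await videoEl.play();
  return stream;
}

// "Stop" trigger: stop every track on the underlying media stream.
function stopCamera(stream: any) {
  stream.getTracks().forEach((track: any) => track.stop());
}

// "Take picture" trigger: draw the current frame to a canvas and
// export it as a data URL -- the image the block emits on its output port.
function takePicture(videoEl: any) {
  const canvas = document.createElement("canvas");
  canvas.width = videoEl.videoWidth;
  canvas.height = videoEl.videoHeight;
  canvas.getContext("2d").drawImage(videoEl, 0, 0);
  return canvas.toDataURL("image/png");
}
```

The front/back setting in the block configuration panel corresponds to the `facingMode` constraint (`"user"` vs `"environment"`) in this model.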
Building our Example
To build our example we’ll start by configuring the camera block. In our example the camera is supposed to start rolling as soon as our application is loaded. To this end, we’ll use the Startup Trigger block (from the block menu on the left) and connect it to the camera block’s start port. Moreover, when the user presses the “Snap!” button we’ll trigger the camera block to take a picture. This first part of our example is shown below.
Once the camera block produces an image we want to display it on our second screen. To do so we’ll connect the output port of our camera block to a NavigateToPage interaction. This ensures that we navigate to the second screen as soon as the picture has been taken. Moreover, we also connect the output port of the camera block to the input port of the image block. This last connection makes sure that the freshly taken picture is rendered in the rectangle on our second screen. The figure below provides an overview of how this last part of our example is built.
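Conceptually, the two wires leaving the camera’s output port amount to two side effects in the running web app: setting the image element’s `src` and switching the visible screen. The sketch below is a hypothetical equivalent in plain TypeScript; the element ids and the function name are made up for illustration and are not weave.ly identifiers.

```typescript
// Sketch only: what the two connections from the camera's output port
// could boil down to in the generated web app. Ids are hypothetical.
// The document-like object is passed in explicitly so the logic is
// easy to exercise outside a browser.
function onPictureTaken(doc: any, dataUrl: string) {
  // Camera output -> Image block: render the picture in the rectangle.
  doc.querySelector("#result-image").src = dataUrl;
  // Camera output -> NavigateToPage: hide screen one, show screen two.
  doc.querySelector("#screen-1").hidden = true;
  doc.querySelector("#screen-2").hidden = false;
}
```

In a real page you would call this with the global `document` and the data URL produced when the picture is taken.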
Result in a Web App
The gif below shows our example application running live in a browser. Picture of our co-founder included!