Intermediate Tutorial


Augmented Reality Using Amazon Sumerian, AWS Amplify, and 8th Wall


60 minutes

Posted on: September 7, 2019


Tags: AR, augmented reality, web-based AR, video, streaming, S3

In this tutorial, you create content in Amazon Sumerian and publish a simple browser-based augmented reality (AR) experience to place a video in AR on top of your business card. This is sharable via a URL in supported browsers (for example, Chrome on an Android device or Safari on an iOS device).

To gain an understanding of the real-world environment, AR applications use technologies such as simultaneous localization and mapping (SLAM) and image tracking (finding and tracking an image in the real world). At the time of writing, the pending WebXR specification promises to expose platform SLAM technology to browsers, but it is still under development. Until it’s released and supported by browsers, you have to choose your own JavaScript SLAM implementation to create web-based AR applications. Alternatively, you can build native Android Sumerian AR applications or iOS Sumerian AR applications. The advantage of creating a web-based AR application, however, is that it is directly sharable by using a URL.

In this tutorial, we use the third-party, commercially available 8th Wall library for SLAM and image tracking.

Note: To complete this tutorial, you must have completed or be familiar with the steps in the Getting Started with Amazon Sumerian, AWS Amplify, and the React Component tutorial.

You’ll learn about:

  • Authoring an AR-viewable scene
  • Publishing a scene privately
  • Sumerian Scripting API
  • Using a video file as a texture for display on a surface
  • Integrating the 8th Wall JavaScript library
  • Tracking and anchoring an entity to an image
  • Publishing a React app to support user authentication

Prerequisites

Before you begin, you should have completed the following tasks and tutorials:

  • Getting Started with Amazon Sumerian, AWS Amplify, and the React Component

Additionally, you will need a short video clip in .mp4 format, smaller than 50 MB, to use as the video to play on your AR business card. If you don’t have such a clip, you can use this clip of the Amazon Sumerian Logo.

Step 1: Start a Project in Sumerian

  1. From the Dashboard, navigate to the scene templates.

  2. Choose the Augmented Reality scene template.

  3. In the New Scene dialog box, choose a descriptive name for your scene, and then choose Create.

Step 2: Create a Quad Entity with a Video Texture

  1. Click + Create Entity above the canvas.

  2. Choose Quad and add it to your scene.

    By default, the Quad has a scale of (1,1,1). We want to adjust the dimensions of the plane to match the dimensions of your video. For example, if the video is in a typical “cinematic” aspect ratio, the ratio of width to height is 16:9.

  3. With the Quad selected, expand the Transform component on the right, check that the Uniform Scale property is off, and set the Scale to (1.6, 0.9, 1.0). This scales the dimensions of the Quad to match “cinematic” video dimensions; the sketch after the next step shows how to handle other aspect ratios.

  4. With Quad still selected, expand the Details section of the top-most panel in the Inspector and edit the Name property. Rename the Quad to “videoQuad”.
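If your video has a different aspect ratio, the same arithmetic applies: keep the longer side at a convenient length and derive the shorter side from the aspect ratio. The following is a minimal sketch (the helper name and the 1.6-unit default are our own, for illustration only):

     // Hypothetical helper: compute a Quad Scale matching a video's aspect
     // ratio, keeping the longer side at a chosen length in scene units.
     function quadScaleForVideo(videoWidth, videoHeight, longSide = 1.6) {
       const aspect = videoWidth / videoHeight; // e.g. 1920 / 1080 = 16:9
       return aspect >= 1
         ? [longSide, longSide / aspect, 1.0]   // landscape, e.g. (1.6, 0.9, 1.0)
         : [longSide * aspect, longSide, 1.0];  // portrait
     }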

Find a video file you would like to use for this tutorial. Keep this video as small as possible to minimize your scene download times and storage costs. For example, in this tutorial the video will rarely take up more than half of the screen on a mobile device, so a width of about 512 pixels is plenty, given that a mobile device in landscape mode is in the range of 800 pixels wide. Also keep in mind that a single file upload is limited to 50 MB in Amazon Sumerian. If you want to use longer or larger videos, consider following the Streaming Video from Amazon S3 tutorial.
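If you have FFmpeg installed, a command along these lines (file names are placeholders) downscales a clip to 512 pixels wide while preserving its aspect ratio and copying the audio track unchanged; the -2 lets FFmpeg choose an even height, which most H.264 encoders require:

    ffmpeg -i input.mp4 -vf scale=512:-2 -c:a copy output.mp4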

  1. Drag your video file from your computer onto the Assets panel and wait for your video to upload.

  2. With the videoQuad selected, expand the Material panel and drag the created video texture from the Assets panel to the Material > COLOR (DIFFUSE) > Texture drop area.

Step 3: Add a Script to Start Video Playback

Some browsers prevent videos from starting automatically, sometimes referred to as “autoplay blocking”. When this happens, a user gesture is required before the video is permitted to play. It’s a best practice to check the value of the Promise returned from a video play attempt to see if autoplay blocking is active, and to seek out a gesture if it is. The following script does this and emits a signal when autoplay is blocked. You’ll then have to set up a response to that signal. For more details about how to respond to autoplay blocking, see the Streaming Video from Amazon S3 tutorial.

  1. Select the videoQuad entity you created in Step 2.

  2. In the Inspector panel, add a Script component by choosing + Add Component then Script.

  3. Expand the Script component and add a script by clicking the + (plus icon) next to the drop input.

  4. Choose Custom (Legacy Format) from the menu that opens.

  5. Edit the script by clicking the pencil icon in the script’s panel.

  6. In the Text Editor, click the pencil icon next to Script in the Documents area on the left of the window, and then rename the script “playVideo”.

  7. In the lower left of the Text Editor, choose Save.

  8. Replace the contents of the playVideo script with the following.

     'use strict';
    
     // The sumerian object can be used to access Sumerian engine
     // types.
     //
     /* global sumerian */
    
     // Called when play mode starts.
     //
     function setup(args, ctx) {
       // This assumes you've uploaded a video as a Sumerian texture and assigned it to the Diffuse texture map on
       // the material of the entity this script is on
       ctx.entityData.video = ctx.entity.meshRendererComponent.materials[0]._textureMaps.DIFFUSE_MAP.image;
    
       ctx.entityData.onPlayVideo = (event) => {
         // Attempt to play video. This may be blocked by the 'autoplay' policy of
         // the browser. If so, we need to capture a gesture from the user before
         // attempting to play(). We do so by emitting a message which a Behavior will
         // listen for to obtain that gesture.
         const playPromise = ctx.entityData.video.play();
         if (playPromise !== undefined) {
           playPromise.then( () => {
             // Autoplay started
             console.log(`play was successful on ${ctx.entityData.video.src}`);
             // reset the video playhead to the beginning
             ctx.entityData.video.currentTime = 0;
           }).catch(error => {
             // Autoplay was prevented. Emit a signal to obtain a user gesture. You'll need to create
             // a state machine to respond to this signal and then emit a 'PlayVideo' signal - please
             // see https://docs.sumerian.amazonaws.com/tutorials/create/beginner/s3-video/ for an example
             // of how to do this.
             console.log('video Autoplay blocked - emitting "VideoAutoplayBlocked" signal. Respond to this message by obtaining user gesture and then emitting a "PlayVideo" signal.');
             sumerian.SystemBus.emit('VideoAutoplayBlocked');
           });
         }
       };
    
       // set up a listener to attempt to play the video and call it
       sumerian.SystemBus.addListener('PlayVideo', ctx.entityData.onPlayVideo);
       sumerian.SystemBus.emit('PlayVideo');
     }
    
     // Called when play mode stops.
     //
     function cleanup(args, ctx) {
       sumerian.SystemBus.removeListener('PlayVideo', ctx.entityData.onPlayVideo);
     }
    
  9. In the lower left of the Text Editor, choose Save.
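The script above only emits the VideoAutoplayBlocked signal; the State Machine response described in the Streaming Video from Amazon S3 tutorial is the recommended way to capture the user gesture. If you prefer to stay in script, the following is a minimal sketch of one possible response (the gesture events and script layout are our own choices, not from that tutorial). Attach it as a second Custom (Legacy Format) script on the videoQuad entity.

     'use strict';

     /* global sumerian */

     // Minimal sketch: wait for a tap or click after autoplay is blocked,
     // then re-emit 'PlayVideo' so the playVideo script retries playback.
     function setup(args, ctx) {
       ctx.entityData.onAutoplayBlocked = () => {
         const onGesture = () => {
           window.removeEventListener('click', onGesture);
           window.removeEventListener('touchend', onGesture);
           // We are now inside a user gesture, so play() should be allowed.
           sumerian.SystemBus.emit('PlayVideo');
         };
         window.addEventListener('click', onGesture);
         window.addEventListener('touchend', onGesture);
       };
       sumerian.SystemBus.addListener('VideoAutoplayBlocked', ctx.entityData.onAutoplayBlocked);
     }

     function cleanup(args, ctx) {
       sumerian.SystemBus.removeListener('VideoAutoplayBlocked', ctx.entityData.onAutoplayBlocked);
     }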

Step 4: Add a Script to Anchor the videoQuad Entity in AR

In this step, you create a script that responds to xrimageupdated events from 8th Wall to anchor your videoQuad entity to a tracked image in the real world.

  1. With the videoQuad still selected, create a new script and name it “imageTargetAnchor”, following the same process as items 3 through 7 of Step 3. Replace the default script with the following:

     'use strict';
     function setup(args, ctx) {
       ctx.firstUpdate = true;
       ctx.imageFound = false;
       ctx.baseRotationMatrix = new sumerian.Matrix3(sumerian.Matrix3.IDENTITY);
       ctx.matrix = new sumerian.Matrix3();
       ctx.quatMatrix = new sumerian.Matrix3();
       ctx.quaternion = new sumerian.Quaternion();
       ctx.baseScale = new sumerian.Vector3(1,1,1);
       ctx.baseTranslation = new sumerian.Vector3(0,0,0);
       ctx.entity.hide();
    
       ctx.worldData.onImageFound = event => {
         ctx.imageFound = true;
         if (ctx.entity.isHidden) {
           ctx.entity.show();
         }
       };
    
       ctx.worldData.onImageLost = event => {
         ctx.imageFound = false;
       };
    
       ctx.worldData.onImageUpdated = event => {
         // Fired when an image's location is updated, either by SLAM or by image tracking. We only
         // want to update the imageTargetAnchor for image tracking, indicated by ctx.imageFound.
         if(!ctx.firstUpdate && ctx.imageFound) {
           // Rotation
           ctx.quaternion.set(event.rotation.x, event.rotation.y, event.rotation.z, event.rotation.w);
           ctx.quatMatrix.copyQuaternion(ctx.quaternion);
           ctx.matrix.mult(ctx.quatMatrix, ctx.baseRotationMatrix);
           ctx.entity.transformComponent.setRotationMatrix(ctx.matrix);
    
           // Translation
           ctx.entity.transformComponent.setTranslation(ctx.baseTranslation.x + event.position.x, ctx.baseTranslation.y + event.position.y, ctx.baseTranslation.z + event.position.z);
    
           // Scale
           ctx.entity.transformComponent.setScale(event.scale * ctx.baseScale.x, event.scale * ctx.baseScale.y, event.scale * ctx.baseScale.z);
         }
       };
    
       // See https://docs.8thwall.com/web/#sumerian-events for additional
       // available 8th Wall events.
       sumerian.SystemBus.addListener('xrimagefound', ctx.worldData.onImageFound);
       sumerian.SystemBus.addListener('xrimagelost', ctx.worldData.onImageLost);
       sumerian.SystemBus.addListener('xrimageupdated', ctx.worldData.onImageUpdated);
     }
    
     function update(args, ctx) {
       if (ctx.firstUpdate) {
         // Stash the unmodified entity's scale and rotation to add on to the
         // image target's location during the xrimageupdated callback
         ctx.firstUpdate = false;
         ctx.baseRotationMatrix.copy(ctx.entity.transformComponent.getRotationMatrix());
         ctx.baseScale.set(ctx.entity.transformComponent.getScale());
         ctx.baseTranslation.set(ctx.entity.transformComponent.getTranslation());
       }
     }
    
     function cleanup(args, ctx) {
       sumerian.SystemBus.removeListener('xrimagefound', ctx.worldData.onImageFound);
       sumerian.SystemBus.removeListener('xrimagelost', ctx.worldData.onImageLost);
       sumerian.SystemBus.removeListener('xrimageupdated', ctx.worldData.onImageUpdated);
     }
    
  2. Under the Camera component, make sure the AR Camera entity is set as the Main Camera.

  3. With the AR Camera still selected, in the Transform component, enter a Translation value of 0.4 in Y. 8th Wall uses the camera height to scale virtual content, so it cannot be zero (see 8th Wall’s troubleshooting guide for more information).

  4. Delete the Default Camera and any other cameras in your Entities panel. You want the AR Camera to be the only camera in your scene.

Step 5: Set Up a React App Created in AWS Amplify

Follow the tutorial Getting Started with Amazon Sumerian, AWS Amplify, and the React Component to set up a React web app to host your scene privately.

Be aware of the following as you complete that tutorial:

  1. Use the scene you created in the Publish the Sumerian Scene Privately and Add It to the Amplify Project section of that tutorial.

  2. When you test the React app locally on your computer, the Main Camera won’t be placed correctly because your computer lacks the rear-facing camera and orientation sensors found in AR devices. The Main Camera will be placed correctly when you test on an AR device later in this tutorial.

  3. In the tutorial’s final section, Running on a VR or AR Device, in which you add hosting and publish your React app, choose PROD. Your app must be served over HTTPS for the 8th Wall library to work correctly.

Step 6: Create an 8th Wall Developer Account

Now that we have our AR scene hosted privately in a React app, we integrate the 8th Wall JavaScript library to take care of image tracking and SLAM. It’s free to set up an 8th Wall developer account and to test your AR web app locally (see the 8th Wall pricing plans for details).

Complete the following sections of the 8th Wall setup tutorial:

  1. Create a ‘Web Developer’ account.
  2. Create an app key. Copy the app key created in this process. You will need it in Step 8.
  3. Authorize your AR device.

Step 7: Add 8th Wall Image Targets

In this step, you upload an image onto which the imageTargetAnchor script you added in Step 4 will anchor Sumerian entities. This can be any image. For this tutorial, try using your business card or something similar.

  1. Follow the 8th Wall documentation to add an image target. You can add more than one image if you want, for example, if you want your video quad to be anchored to your business card and your driver’s license.

  2. Crop your image so that it doesn’t have empty space around it (you can do this in the 8th Wall interface as you upload the image).

Step 8: Integrate 8th Wall into the React App

  1. In your React app directory, open public/index.html. Then add the following line between the <head> ... </head> tags, replacing APP_KEY with the app key you copied in Step 6.
     <head>
     ...
       <script async src="https://apps.8thwall.com/xrweb?appKey=APP_KEY"></script>
     ...
     </head>
    
  2. Add code to initialize the 8th Wall library. In src/App.js, replace the line import Amplify from 'aws-amplify'; and the App class definition with the following.

     // ...
     import Amplify, {XR as awsXR} from 'aws-amplify';
     // ...
    
     class App extends Component {
       render() {
         return (
           <div id="sumerian-scene-dom-id" style={ {height: '100vh'} }>
             <p id="loading-status">Loading...</p>
           </div>
         );
       }
    
       componentDidMount() {
         this.loadAndStartScene();
       }
    
       async loadAndStartScene() {
         await awsXR.loadScene('scene1', 'sumerian-scene-dom-id');
    
         const world = awsXR.getSceneController('scene1').sumerianRunner.world;
    
         window.sumerian.SystemBus.addListener('xrerror', (params) => {
           // Add error handling here
         });
    
         window.sumerian.SystemBus.addListener('xrready', () => {
           // Both the Sumerian scene and XR8 camera have loaded. Dismiss loading status
           const loadingStatus = window.document.getElementById('loading-status');
           if (loadingStatus && loadingStatus.parentNode) {
             loadingStatus.parentNode.removeChild(loadingStatus);
           }
         });
    
         window.XR8.Sumerian.addXRWebSystem(world);
    
         awsXR.start('scene1');
       }
     }
    

    Note: This code assumes you named your Sumerian scene scene1 during the amplify add xr step in _Step 5: Set Up a React App Created in AWS Amplify_. If you named your scene something other than scene1, you must update the code with that name.

Step 9: Build, Deploy, and Test

Build and deploy your React app created in AWS Amplify, as follows.

    amplify publish --invalidateCloudFront

This builds and deploys your app, and prints a URL in the terminal window where you can view your React app on your AR device. The --invalidateCloudFront flag forces the Amazon CloudFront cache to be cleared for the publish. This is needed if you’re iterating and publishing in quick succession.

Optional: Debugging on AR Devices

To debug issues, you might find that you need web developer tools such as the JavaScript console and debugger. On both Android and iOS, desktop web developer tools can remotely debug the mobile device. Simply connect the AR mobile device to your computer via a USB cable.

  • For instructions on remote debugging on Android using Chrome, see this documentation. In short, enable USB debugging on the device and open chrome://inspect in desktop Chrome.
  • For remote debugging on iOS using Safari, enable Web Inspector on the device (Settings > Safari > Advanced), connect your mobile device to your computer with a USB cable, and then open Safari’s Develop menu and select the AR device from the Device List.

Optional: Interaction Improvement - Do Not Play Hidden Video

As an exercise, see if you can modify the playVideo script so that instead of calling video.play() on load, it waits for the first xrimagefound event to start the video when the image target is first detected. A sketch of one possible approach follows.
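One way to do this (a sketch, assuming you keep the existing ‘PlayVideo’ listener in place): in playVideo’s setup() function, replace the kick-off sumerian.SystemBus.emit('PlayVideo') call with a one-shot listener for 8th Wall’s xrimagefound event.

     // Inside playVideo's setup(), instead of emitting 'PlayVideo' immediately:
     ctx.entityData.onFirstImageFound = () => {
       // React only to the first detection, then hand off to the existing
       // 'PlayVideo' listener, which calls video.play().
       sumerian.SystemBus.removeListener('xrimagefound', ctx.entityData.onFirstImageFound);
       sumerian.SystemBus.emit('PlayVideo');
     };
     sumerian.SystemBus.addListener('xrimagefound', ctx.entityData.onFirstImageFound);

Remember to also remove this listener in cleanup() in case it hasn’t fired by the time play mode stops.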

You should now have a much better understanding of how to create an AR application using Amazon Sumerian, AWS Amplify, and 8th Wall. To learn more, explore the other Sumerian tutorials.

