Environment Mesh: Blend the Real with the Virtual


Augmented reality (AR) is now widely used across a diverse range of fields to deliver fun, immersive experiences and interactions. Features such as virtual try-on, 3D gameplay, and interior design all depend on this technology. For example, many of today's video games use AR to keep gameplay seamless and interactive: players can create virtual characters in battle games and move them as if they were extensions of the player's body. With AR, characters can move and behave like real people, hiding behind a wall, for instance, to escape detection by the enemy. Another common application is adding elements such as pets, friends, and objects to photos without compromising the natural look of the image.

However, AR app development is still hindered by the so-called pass-through problem, which you may have encountered during development: a fast-moving ball passes through a table, a player cannot move even though there are no obstacles around, or a speeding bullet passes through and misses its target. You may also have found that the virtual objects your app places in the physical world look as if they were pasted onto the screen rather than blended into the environment. This can significantly undermine the user experience and may lead directly to user churn. Fortunately, the environment mesh capability in HMS Core AR Engine, a toolkit that offers powerful AR capabilities and streamlines app development, can resolve these issues once and for all. Once integrated with this toolkit, your app will gain better perception of the 3D space in which a virtual object is placed and can perform collision detection using the reconstructed mesh. This ensures that users can interact with virtual objects in a highly realistic and natural manner, and that virtual characters can move around 3D spaces with greater ease. The following sections show you how to implement this capability.

Demo

(Demo animation: Environment Mesh)

Implementation

AR Engine uses real-time computing to output the environment mesh, which includes the device orientation in real space and a 3D mesh of the current camera view. AR Engine is currently supported only on phone models with a rear ToF camera, and only supports the scanning of static scenes. Once integrated with this toolkit, your app will be able to use environment meshes to accurately recognize the real-world 3D space in which a virtual character is located, and to place the character anywhere in that space, whether on a horizontal, vertical, or curved surface that can be reconstructed. You can use the reconstructed environment mesh to implement occlusion between virtual and physical objects and to perform collision detection, and even hide virtual objects behind physical ones, effectively preventing pass-through.
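
To make the collision-detection idea more concrete, here is a minimal, illustrative sketch that brute-force scans the reconstructed mesh vertices and reports whether any of them lie within a given radius of a virtual object's position. Only ARSceneMesh.getVertices() comes from the SDK (assuming its standard com.huawei.hiar package); the class name, radius parameter, and the assumption that both positions are expressed in the same coordinate system are purely illustrative.

import com.huawei.hiar.ARSceneMesh;
import java.nio.FloatBuffer;

public class MeshCollisionHelper {
    // Returns true if any reconstructed mesh vertex lies within collisionRadius of
    // objectPosition (a float[3] of x, y, z), assuming both use the same coordinate system.
    public static boolean isNearMesh(ARSceneMesh sceneMesh, float[] objectPosition, float collisionRadius) {
        FloatBuffer vertices = sceneMesh.getVertices(); // Packed as x, y, z triples.
        float radiusSquared = collisionRadius * collisionRadius;
        while (vertices.remaining() >= 3) {
            float dx = vertices.get() - objectPosition[0];
            float dy = vertices.get() - objectPosition[1];
            float dz = vertices.get() - objectPosition[2];
            if (dx * dx + dy * dy + dz * dz <= radiusSquared) {
                // The virtual object is touching (or inside) the reconstructed surface.
                return true;
            }
        }
        return false;
    }
}

A production app would typically use a spatial data structure or a physics engine rather than a linear scan, but the principle is the same: the reconstructed mesh is just geometry that virtual objects can be tested against.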

Environment mesh technology has a wide range of applications. For example, it can be used to provide users with more immersive and refined virtual-reality interactions during remote collaboration, video conferencing, online courses, multi-player gaming, laser beam scanning (LBS), metaverse, and more.

Integration Procedure

Ensure that you have met the following requirements on the development environment (a sample Gradle snippet reflecting these settings follows the list):

  • JDK: 1.8.211 or later

  • Android Studio: 3.0 or later

  • minSdkVersion: 26 or later

  • targetSdkVersion: 29 (recommended)

  • compileSdkVersion: 29 (recommended)

  • Gradle version: 6.1.1 or later (recommended)
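
For reference, a module-level build.gradle that satisfies the version requirements above might look like the following. This is only a sketch; the applicationId and version fields are placeholders.

android {
    compileSdkVersion 29

    defaultConfig {
        applicationId "com.example.armesh" // Placeholder package name.
        minSdkVersion 26
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
    }

    compileOptions {
        // Matches the JDK 1.8 requirement listed above.
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}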

Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
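
You can also check at runtime whether the AR Engine APK is present before creating a session, so that your app can prompt the user instead of failing. A minimal sketch, assuming the SDK's AREnginesApk utility class in the com.huawei.hiar package:

import android.content.Context;
import com.huawei.hiar.AREnginesApk;

public class ArEngineAvailabilityChecker {
    // Returns true if the AR Engine server APK is installed and ready on this device.
    public static boolean isArEngineReady(Context context) {
        return AREnginesApk.isAREngineApkReady(context);
    }
}

If the check returns false, you would typically prompt the user to install the AR Engine APK from AppGallery before continuing.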

If you need to use multiple HMS Core kits, use the latest versions required for these kits.

Preparations

i. Before getting started, you will need to register as a Huawei developer and complete identity verification on the HUAWEI Developers website, where you can find the detailed registration and identity verification procedure.

ii. Before development, integrate the AR Engine SDK into your development environment via the Maven repository.

iii. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.

iv. The following takes Gradle plugin 7.0 as an example:

Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.

Go to buildscript > repositories and configure the Maven repository address for the SDK.

buildscript {
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}

Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.

dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}

v. Add the following build dependency in the dependencies block.

dependencies {
    implementation 'com.huawei.hms:arenginesdk:{version}'
}

Development Procedure

i. Initialize the HitResultDisplay class to draw virtual objects based on the specified parameters.

public class HitResultDisplay implements SceneMeshComponenDisplay {
    // Initialize VirtualObjectData.
    VirtualObjectData mVirtualObject = new VirtualObjectData();
    // Latest valid hit result and hit flag, used when placing the virtual object.
    private ARHitResult hitResult = null;
    private boolean isHasHitFlag = false;
    // Pixel intensity estimated from the current camera frame.
    private float lightIntensity = 1.0f;

    // Pass the context to mVirtualObject in the init method.
    public void init(Context context) {
        mVirtualObject.init(context);
        // Pass the material attributes.
        mVirtualObject.setMaterialProperties();
    }

    // Pass ARFrame in the onDrawFrame method to obtain the estimated lighting.
    public void onDrawFrame(ARFrame arframe) {
        // Obtain the estimated lighting.
        ARLightEstimate le = arframe.getLightEstimate();
        // Obtain the pixel intensity of the current camera field of view.
        lightIntensity = le.getPixelIntensity();
        // Pass data to methods in mVirtualObject.
        mVirtualObject.draw(…,…,lightIntensity,…);
        // Pass the ARFrame object to the handleTap method to obtain the coordinate information.
        handleTap(arframe);
    }

    // Implement the handleTap method. tap is the MotionEvent captured by the gesture listener.
    private void handleTap(ARFrame frame) {
        // Call hitTest with the ARFrame object.
        List<ARHitResult> hitTestResults = frame.hitTest(tap);
        // Check whether a surface is hit and whether it is hit in a plane polygon.
        for (int i = 0; i < hitTestResults.size(); i++) {
            ARHitResult hitResultTemp = hitTestResults.get(i);
            ARTrackable trackable = hitResultTemp.getTrackable();
            if (trackable instanceof ARPoint
                    && ((ARPoint) trackable).getOrientationMode() == ARPoint.OrientationMode.ESTIMATED_SURFACE_NORMAL) {
                isHasHitFlag = true;
                hitResult = hitResultTemp;
            }
        }
    }
}
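
Once handleTap has recorded a valid hitResult, a typical next step is to anchor the virtual object to the hit location so that it stays fixed to the real-world surface. The sketch below is a helper method you might add to HitResultDisplay; it assumes the common anchoring pattern (ARHitResult.createAnchor(), ARAnchor.getPose(), ARPose.toMatrix()), and the method name is illustrative.

// Illustrative helper for HitResultDisplay: place the virtual object at the latest hit.
// Call it from onDrawFrame after isHasHitFlag becomes true.
private void placeObjectAtHit() {
    if (!isHasHitFlag || hitResult == null) {
        return;
    }
    // Create an anchor so the object stays attached to the surface that was tapped.
    ARAnchor anchor = hitResult.createAnchor();
    float[] modelMatrix = new float[16];
    // Convert the anchor pose into a model matrix, which can then be passed to
    // mVirtualObject.draw(...) together with the view/projection matrices elided above.
    anchor.getPose().toMatrix(modelMatrix, 0);
    isHasHitFlag = false;
}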

ii. Initialize the SceneMeshDisplay class to render the scene mesh.

public class SceneMeshDisplay implements SceneMeshComponenDisplay {
    // Implement OpenGL operations in init.
    public void init(Context context) {}

    // Obtain the current environment mesh in the onDrawFrame method.
    public void onDrawFrame(ARFrame arframe) {
        ARSceneMesh arSceneMesh = arframe.acquireSceneMesh();
        // Create a method for updating data and pass arSceneMesh to the method.
        updateSceneMeshData(arSceneMesh);
        // Release arSceneMesh when it is no longer in use.
        arSceneMesh.release();
    }

    // Implement this method to update data.
    public void updateSceneMeshData(ARSceneMesh sceneMesh) {
        // Obtain a buffer containing the vertex coordinates of the environment mesh in the current view.
        FloatBuffer meshVertices = sceneMesh.getVertices();
        // Obtain a buffer containing the indices of the vertices that form mesh triangles in the current view.
        IntBuffer meshTriangleIndices = sceneMesh.getTriangleIndices();
    }
}
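
As an illustration of what updateSceneMeshData might do with these buffers, the sketch below copies the vertices and triangle indices into OpenGL ES buffer objects so the mesh can be drawn. The buffer handles mVerticesVbo and mTriangleIndicesVbo are assumptions; they would have been created with GLES20.glGenBuffers() in init().

// Possible body for updateSceneMeshData: upload the mesh into GL buffers for rendering.
public void updateSceneMeshData(ARSceneMesh sceneMesh) {
    FloatBuffer meshVertices = sceneMesh.getVertices();
    IntBuffer meshTriangleIndices = sceneMesh.getTriangleIndices();

    // Upload vertex positions (3 floats per vertex, 4 bytes per float).
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticesVbo);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
            meshVertices.limit() * 4, meshVertices, GLES20.GL_DYNAMIC_DRAW);

    // Upload triangle indices (3 ints per triangle, 4 bytes per int).
    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleIndicesVbo);
    GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER,
            meshTriangleIndices.limit() * 4, meshTriangleIndices, GLES20.GL_DYNAMIC_DRAW);

    // Unbind to avoid accidental modification elsewhere.
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
}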

iii. Initialize the SceneMeshRenderManager class to provide render managers for external scenes, including those for virtual objects.

public class SceneMeshRenderManager implements GLSurfaceView.Renderer {
    // Initialize the class for updating mesh data and performing rendering.
    private SceneMeshDisplay mSceneMesh = new SceneMeshDisplay();
    // Initialize the class for drawing virtual objects.
    private HitResultDisplay mHitResultDisplay = new HitResultDisplay();
    // Context, session, camera texture, and matrices used during rendering.
    private Context mContext;
    private ARSession mArSession;
    private int mTextureId;
    private float[] viewmtxs = new float[16];
    private float[] projmtxs = new float[16];

    // Pass the ARSession created by the activity (see SceneMeshActivity below).
    public void setArSession(ARSession arSession) {
        mArSession = arSession;
    }

    // Implement the onSurfaceCreated() method.
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Pass the context to the mSceneMesh and mHitResultDisplay classes.
        mSceneMesh.init(mContext);
        mHitResultDisplay.init(mContext);
    }

    // Implement the onSurfaceChanged() method (update the viewport here).
    public void onSurfaceChanged(GL10 gl, int width, int height) {}

    // Implement the onDrawFrame() method.
    public void onDrawFrame(GL10 gl) {
        // Bind the camera preview texture to the ARSession object.
        mArSession.setCameraTextureName(mTextureId);
        ARFrame arFrame = mArSession.update();
        ARCamera arCamera = arFrame.getCamera();
        // Obtain the view and projection matrices from the camera.
        arCamera.getViewMatrix(viewmtxs, 0);
        arCamera.getProjectionMatrix(projmtxs, 0, 0.1f, 100.0f);
        // Pass the data required by the SceneMeshDisplay class.
        mSceneMesh.onDrawFrame(arFrame, viewmtxs, projmtxs);
    }
}
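
Because SceneMeshRenderManager implements GLSurfaceView.Renderer, it still has to be attached to a GLSurfaceView before anything is drawn. A minimal wiring sketch, typically placed in the activity's onCreate (R.id.surface_view is a placeholder view ID):

// Attach the render manager to the GLSurfaceView that shows the camera preview.
GLSurfaceView surfaceView = findViewById(R.id.surface_view); // Placeholder view ID.
surfaceView.setPreserveEGLContextOnPause(true);
// Use an OpenGL ES 2.0 context to match the GLES20 calls used for rendering.
surfaceView.setEGLContextClientVersion(2);
surfaceView.setRenderer(mSceneMeshRenderManager);
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);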

iv. Initialize the SceneMeshActivity class to implement display functions.

public class SceneMeshActivity extends BaseActivity {
    // Provides render managers for external scenes, including those for virtual objects.
    private SceneMeshRenderManager mSceneMeshRenderManager;
    // Manages the entire running status of AR Engine.
    private ARSession mArSession;

    // Initialize some classes and objects.
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mSceneMeshRenderManager = new SceneMeshRenderManager();
    }

    // Initialize ARSession in the onResume method.
    protected void onResume() {
        super.onResume();
        // Initialize ARSession.
        mArSession = new ARSession(this.getApplicationContext());
        // Create an ARWorldTrackingConfig object based on the session parameters.
        ARConfigBase config = new ARWorldTrackingConfig(mArSession);
        // Enable the mesh by calling the setEnableItem method using config.
        config.setEnableItem(ARConfigBase.ENABLE_MESH | ARConfigBase.ENABLE_DEPTH);
        // Apply the configuration and start the session.
        mArSession.configure(config);
        mArSession.resume();
        // Pass ARSession to SceneMeshRenderManager.
        mSceneMeshRenderManager.setArSession(mArSession);
    }
}
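
To keep the session lifecycle consistent, the activity should also pause and release the session when it leaves the foreground. A minimal sketch using the standard ARSession pause() and stop() methods:

// Suspend the session when the activity goes into the background.
protected void onPause() {
    super.onPause();
    if (mArSession != null) {
        mArSession.pause();
    }
}

// Stop the session and release its resources when the activity is destroyed.
protected void onDestroy() {
    super.onDestroy();
    if (mArSession != null) {
        mArSession.stop();
        mArSession = null;
    }
}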

Conclusion

AR bridges the real and virtual worlds, making jaw-dropping interactive experiences accessible to all users. That is why so many mobile app developers have opted to build AR capabilities into their apps. Doing so can give your app a leg up over the competition.

When developing such an app, you will need to incorporate a range of capabilities, such as hand recognition, motion tracking, hit testing, plane detection, and lighting estimation. Fortunately, you do not have to build any of this on your own. Integrating an SDK can greatly streamline the process and provide your app with many capabilities that are fundamental to seamless and immersive AR interactions. If you are not sure how to deal with the pass-through issue, or your app struggles to present virtual objects naturally in the real world, AR Engine can do a lot of the heavy lifting for you. Once integrated with this toolkit, your app will be able to better perceive the physical environment around virtual objects, and therefore give characters the freedom to move around as if they were navigating real spaces.

References

HUAWEI Developers

AR Engine Development Guide

Software and Hardware Requirements of AR Engine Features

AR Engine Sample Code