AI-powered Content Creation¶
Overview¶
In version 9.6, we introduced an AI Assistant capable of answering a wide variety of questions. Now, we are taking the AI Assistant to the next level by enabling it to automatically generate content for you based on a chosen subject.
This new feature offers two main experiences: Create Experience and Explore Experience. Each experience provides a unique and engaging way to learn and interact with content.
AI-Generated Content and Show & Tell Presentations¶
The AI Assistant generates content and enhances the learning experience by running interactive Show & Tell presentations. While explaining the content, the AI Assistant can point to objects or parts within the scene, adding a spatial dimension to your understanding.
These presentations are conversational and allow for user input. The AI Assistant will pause and request your feedback or give you time to explore information panels, watch videos, or otherwise engage with the content before moving on to the next segment.
Assessments and Customization¶
Upon completing a presentation, the AI Assistant generates assessments, such as quizzes, to help you test your understanding of the subject matter, so you can create and learn at the same time.
After the AI Assistant finishes the process, you have the option to further customize the content before saving and sharing it with students or peers. This step is crucial, as the AI may occasionally suggest an incorrect image or video for a topic. You can easily edit the content and choose alternative media if needed.
Create Experience¶
The Create experience focuses on explorative learning and understanding a given subject. You’ll learn about the composition, relevant procedures, and functions of the chosen subject. For example, if you select a jet engine, you’ll learn about its components, their functions, and the procedures for maintenance or disassembly.
Explore Experience¶
The Explore experience is designed for discovering and learning about items or places in your physical environment. After completing an Explore session, you’ll have annotated your surroundings with persistent knowledge portals. These location-based augmented reality (AR) experiences can be used in various settings, such as museums, tourist destinations, industrial plants, maintenance, repair, and operations (MRO), social media, events, and more.
As in the Create experience, the AI Assistant automatically generates relevant content based on the names you assign to objects. It also explores and creates related sub-elements to provide a deeper understanding of the main topic.
Getting Started¶
1. Launching the new conversational AI assistant¶
When you launch EON-XR with a valid MVB license and are logged in, you are taken directly to AR mode, where you can start the creation process.
2. Anchoring the assistant avatar¶
First, you will be asked to place the AI Assistant on the floor in front of you. Choose a planar surface a few meters ahead and tap the screen to anchor the avatar.
3. Choosing an Experience¶
After the avatar is placed, it will ask you what you want to do. You can choose to:
Create an explorative learning experience based on a subject you choose.
Explore the physical environment by annotating objects or places around you and letting the AI find and attach information panels (knowledge portals) to them, which persist and can be experienced later by you or other people.
Browse the Xperience Library to load and view saved experiences.
Skip this AI-powered conversational creation process and go to manual mode to create the experience without help from the AI Assistant.
To choose, tap one of the buttons on the AI portal, or use the experimental natural-language input to say your choice. The natural-language input is quite flexible: as long as what you say resembles one of the choices, the assistant will understand and execute it. If it cannot understand your request, it will say so; you can then tap the microphone button to try again, or use the buttons to choose.
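As a rough illustration, this kind of flexible matching can be pictured as a simple keyword lookup against each choice. The following Kotlin sketch is hypothetical; the enum, function name, and keyword lists are assumptions for illustration, not EON-XR's actual implementation.

```kotlin
// Hypothetical sketch of keyword-based intent matching.
enum class PortalChoice { CREATE, EXPLORE, LIBRARY, SKIP }

// Keyword lists are assumptions; any matching word selects the choice.
private val keywords = mapOf(
    PortalChoice.CREATE to listOf("create", "make", "build"),
    PortalChoice.EXPLORE to listOf("explore", "annotate", "environment"),
    PortalChoice.LIBRARY to listOf("library", "browse", "load"),
    PortalChoice.SKIP to listOf("skip", "manual"),
)

/** Returns the matching choice, or null if the utterance is not understood. */
fun matchChoice(utterance: String): PortalChoice? {
    val words = utterance.lowercase().split(Regex("\\W+"))
    return keywords.entries.firstOrNull { (_, kws) -> words.any { it in kws } }?.key
}
```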
Create Experience¶
1. Choosing a Subject¶
For the Create option, you will next choose a subject for the experience by browsing an extensive list of 800+ curated topics organized into logical segments and categories.
Alternatively, you can also input your own custom subject by tapping the microphone button.
Note
If you used the natural-language input to choose the type of experience to create (step 3 above), you can say the subject in the same sentence, and this step will be skipped. For example, you can say: “I want to create an experience about a jet engine”, and the assistant will pick up that you want the Create Experience with jet engine as the subject.
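Continuing the hypothetical sketch above, pulling a subject out of the same utterance could be as simple as matching an “about …” phrase; the pattern and function name are assumptions, not the app's actual parser.

```kotlin
// Hypothetical subject extraction from a combined utterance.
fun extractSubject(utterance: String): String? =
    Regex("""about (an? )?(.+)""", RegexOption.IGNORE_CASE)
        .find(utterance)
        ?.groupValues?.get(2)
        ?.trim()?.trimEnd('.', '?', '!')

// Example: "I want to create an experience about a jet engine"
// -> matchChoice(...) == PortalChoice.CREATE
// -> extractSubject(...) == "jet engine"
```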
2. Single-Object or Multi-Object Experience¶
Once a subject is chosen, the AI Assistant will ask whether you want to create a single-object or a multi-object experience. The latter means you will be able to manipulate and handle the items in the scene separately.
Note
If you have chosen a custom topic, the multi-object choice is not available, so you will be taken directly to the next step.
3. Single-Object Experience (Main Components)¶
For the single-object experience, the AI Assistant will:
Introduce you to the subject while showing a representative image.
Locate and show a fun fact about the subject.
Suggest a video you can watch to learn more about the subject.
When you are ready to continue, select Continue on the AI portal to go to the next step.
3.1 Choosing a 3D Model¶
In this step, the AI Assistant will ask you to choose a 3D model for the experience. If you cannot find a suitable 3D model, you can use an image instead. After the 3D model is chosen, the AI will pick the top 10 sub-elements related to the subject and populate them as annotations floating above the model.
3.2 Watching the Show & Tell Presentations¶
When you have picked a 3D model to represent the subject, you can learn more about the topic by choosing one of the following Show & Tell presentations to watch:
Parts: This presentation focuses on the composition of the subject, going through each sub-element and briefly describing it.
Process: This presentation chooses a procedure related to the subject and talks about that instead, highlighting the sub-elements as they are used in the procedure.
Hint
You can watch both presentations if you want, by simply going back to the AI portal and tapping the other presentation button.
3.3 Exploring Knowledge Portals and Assessments¶
After the presentation, you can freely explore the knowledge portals of each element or try out the assessments that the AI assistant has prepared for you based on the elements and subject presented so far.
To start an assessment, tap on the corresponding hyperlink in the AI portal and then follow the onscreen instructions to complete the task.
4. Multi-Object Procedure¶
If you have chosen a subject from the topics menu, you can either learn about the main composition and a process of a single object, or follow a procedure involving multiple objects. If you choose the latter, the assistant will use AI to find a suitable procedure for you to learn, related to your chosen subject and involving up to 5 objects.
4.1 Adding Objects for Multi-Object Procedure¶
After listening to an introduction of the chosen procedure, you will be asked to add the objects one by one. Tap the Add button and use the 3D asset dialog to locate a suitable 3D model to represent the object in the list. If you cannot find a suitable 3D model, you can choose a 2D image instead, or select Cancel to skip adding a model for this object.
4.2 Multi-Object Procedure Presentation¶
After you have gone through the list, the multi-object procedure presentation will start. The AI Assistant will describe the procedure, pointing to and highlighting the objects involved in each step.
Explore Experience¶
The Explore experience allows you to create a location-based AR experience that persists over time. This means you can annotate objects in your physical environment, and the assistant will use AI to populate the experience with information and knowledge. The next time you or someone else revisits this place, you can load the same experience and see all the annotations appear in the locations you set.
Note
To create these AR experiences, you need an Android device that supports ARCore and the Depth API. On iPhone and iPad (iOS), a LiDAR sensor is required, which means you need the Pro models of those devices.
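For readers who want to verify this requirement on Android, the Depth API support check can be done with ARCore's public API. This is a generic sketch of such a check, not EON-XR's own code:

```kotlin
import android.content.Context
import com.google.ar.core.ArCoreApk
import com.google.ar.core.Config
import com.google.ar.core.Session

// Returns true if the device supports ARCore and the Depth API,
// matching the requirement described in the note above.
fun supportsExplore(context: Context): Boolean {
    // First check that the device supports ARCore at all.
    if (!ArCoreApk.getInstance().checkAvailability(context).isSupported) return false
    // Creating a Session can throw if ARCore is not installed or up to date.
    val session = Session(context)
    return try {
        session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)
    } finally {
        session.close()
    }
}
```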
Creating an Explore Experience¶
Here are the steps to create such an experience:
Persistent Anchor: Follow the onscreen instructions to create a persistent anchor. Choose a point close to the objects to increase the positioning precision.
Annotating Objects: Go to an object of interest and choose one of the methods to annotate:
Microphone button: Annotate using voice input.
Camera AI button: Annotate using image recognition. Take a picture of the object, and the app will locate the internet images that most closely resemble it. Pick the one you think is most similar to the object, and the app will use that image's caption as the annotation. If the caption is very long, you can use the word buttons to deselect the words you want to remove and shorten the annotation (see the sketch after this list).
Keyboard: Use the onscreen keyboard to input the annotation text directly.
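To picture the caption-shortening step, here is a minimal hypothetical Kotlin sketch: the caption is split into words, and any words you deselect are dropped from the final annotation. The function name and signature are assumptions.

```kotlin
// Hypothetical caption shortening: drop the words the user deselected.
fun shortenCaption(caption: String, deselected: Set<String>): String =
    caption.split(Regex("\\s+"))
        .filter { it !in deselected }
        .joinToString(" ")

// Example: shortenCaption("a large twin spool turbofan jet engine",
//                         setOf("a", "large"))
// -> "twin spool turbofan jet engine"
```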
Note
The annotation point follows the curvature of the surface in the physical environment. Surface detection has a limited range (up to 3 m), so walk closer to the object if the annotation point does not appear. Sweep the device around to choose the annotation point. The moment you select one of the annotation methods above, the annotation point is locked and can no longer be changed.
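The 3 m range limit in the note can be illustrated with a generic ARCore hit test that keeps only surface hits within that distance. This sketch assumes an ARCore Frame and a screen point; it is not EON-XR's actual implementation.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.HitResult

// Generic sketch: hitTest returns hits sorted by increasing distance,
// so take the first surface hit within the 3 m detection range.
fun annotationPoint(frame: Frame, x: Float, y: Float): HitResult? =
    frame.hitTest(x, y).firstOrNull { it.distance <= 3.0f }
```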
AI Assistant Presentation: Based on the annotation you have set, the AI Assistant will run a short presentation of this subject (annotation = subject), with an image, a fun fact, and a suggested video to watch.
Discovering Sub-Elements: Select Continue on the AI portal to allow the AI Assistant to discover and suggest up to 10 sub-elements related to this main topic. These are displayed under the top main annotation.
Show & Tell Presentation: Next, the AI Assistant will run a Show & Tell presentation that briefly explores and describes each of these sub-elements.
Note
You can use the expand toggle button on the right side of the top main annotation to show/hide these sub-annotations.
Adding More Annotations or Exploring Elements: When the presentation is finished, you can choose to add another annotation or explore the elements further. You can also choose not to add any more annotations, at which point you will exit this Explore Experience mode.
Browsing the Xperience Library¶
You can access the Xperience Library to load and view saved experiences created by you or others. To browse the library, follow these steps:
From the main menu, choose the Library option.
Browse through the available experiences using the on-screen navigation options.
Select an experience to load and view it.
Note
You can also search for specific experiences using the search bar at the top of the Xperience Library screen.
Manual Mode¶
If you prefer to create learning experiences without help from the AI Assistant, you can opt for Manual Mode. To access it, choose the Skip option from the main menu when selecting an experience type.
In Manual Mode, you can create and customize your learning experiences using a more traditional, hands-on approach; see Metaverse Builder for details.
Future Improvements¶
As we continue to enhance this feature, we will offer additional ways of using AI to help you create various types of learning experiences. Stay tuned for more updates on this exciting AI-driven content creation tool.