
Magic Leap 2 – Pricing Released


Magic Leap 2 Base

$3,299 (US only)

Magic Leap 2 Base targets professionals and developers who wish to access one of the most advanced augmented reality devices available. Use in full commercial deployments and production environments is permitted. The device starts at an MSRP of $3,299 USD (US only) and includes a 1-year limited warranty.


Magic Leap 2 Developer Pro

$4,099 (US only)

Magic Leap 2 Developer Pro provides access to developer tools, sample projects, enterprise-grade features, and monthly early releases for development and test purposes. It is recommended only for internal use in the development and testing of applications; use in full commercial deployments and production environments is not permitted. Magic Leap 2 Developer Pro will start at an MSRP of $4,099 USD (US only) and includes a 1-year limited warranty.


Magic Leap 2 Enterprise

$4,999 (US only)

Magic Leap 2 Enterprise targets environments that require flexible, large-scale IT deployments and robust enterprise features. This tier includes quarterly software releases, fully manageable via enterprise UEM/MDM solutions. Use in full commercial deployments and production environments is permitted. Magic Leap 2 Enterprise comes with 2 years of access to enterprise features and updates, will start at an MSRP of $4,999 USD (US only), and includes an extended 2-year limited warranty.

Most Immersive

Magic Leap 2 is the most immersive AR device on the market. It features industry-leading optics with up to a 70° diagonal FOV, the world’s first dynamic dimming capability, and powerful computing in a lightweight, ergonomic design to elevate enterprise AR solutions.

Built for Enterprise

Magic Leap 2 delivers a full array of capabilities and features that enable rapid and secure enterprise deployment. With platform-level support for complete cloud autonomy, data privacy, and device management through leading MDM providers, Magic Leap 2 offers the security and flexibility that businesses demand.

Empowering Developers

Magic Leap 2’s open platform provides choice and ease-of-use with our AOSP-based OS and support for leading open software standards, including OpenGL and Vulkan, with OpenXR and WebXR coming in 2H 2022. Our platform also supports your choice of engines and tools and is cloud agnostic. Magic Leap 2’s robust developer portal provides the resources and tools needed to learn, build, and publish innovative solutions.




AREA Human Factors Group Developing an AR & MR Usability Heuristic Checklist

Usability is an essential prerequisite for any successful AR application. If any aspect of the application – from the cognitive impact on the user to the comfort of the AR device – has a significant negative impact on usability, it could discourage user acceptance and limit projected productivity gains and return-on-investment.

But how can organizations pursuing an AR application evaluate a solution’s usability? To answer that question, the AREA Human Factors Committee has undertaken the development of an AR and MR Usability Heuristic Checklist. Driven by Jessyca Derby and Barbara S. Chaparro of Embry-Riddle Aeronautical University and Jon Kies of Qualcomm, the Checklist is intended to be used as a tool for practitioners to evaluate the usability and experience of an AR or MR application.

The AR & MR Usability Heuristic Checklist currently includes the following heuristics:

  • Unboxing & Set-Up
  • Help & Documentation
  • Cognitive Overload
  • Integration of Physical and Virtual Worlds
  • Consistency & Standards
  • Collaboration
  • Comfort
  • Feedback
  • User Interaction
  • Recognition Rather than Recall
  • Device Maintainability

The team is in the process of validating these heuristics across a range of devices and applications. So far, they have conducted evaluations with head-mounted display devices (such as Magic Leap and HoloLens), mobile phones, educational applications, and AR/MR games; see their recent journal article for more information.

To further ensure that the breadth of the AR and MR Usability Heuristic Checklist remains valuable across domains and devices, they are in the process of conducting further validation that will consider:

  • Privacy
  • Safety
  • Inclusion, Diversity, and Accessibility
  • Technical aspects of designing for AR and MR (e.g., standards for 3D rendering)
  • Standards for sensory output (e.g., tactile feedback, spatial audio, etc.)
  • Applications that allow multiple users to collaborate in a shared space
  • A range of devices (e.g., AR and MR glasses such as Lenovo’s Think Reality A3)

In the coming months, the team will move on to identifying and obtaining applications and/or hardware that touch on the areas outlined above. They will then conduct heuristic evaluations and usability testing with the applications and/or hardware to further refine and validate the Checklist. The final step will be to establish an Excel-based toolkit that will house the Checklist. This will enable practitioners to easily complete an evaluation and immediately obtain results.
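The evaluation flow the team describes — rate each heuristic, then aggregate the ratings into an immediate overall result — can be sketched in a few lines. This is a hypothetical illustration only, not the AREA toolkit itself: the heuristic names come from the checklist above, while the 0–4 severity scale and the aggregation choices are assumptions borrowed from common heuristic-evaluation practice.

```python
# Hypothetical sketch of tallying a heuristic-checklist evaluation.
# Severity scale (assumed): 0 = no issue ... 4 = usability catastrophe.

HEURISTICS = [
    "Unboxing & Set-Up",
    "Help & Documentation",
    "Cognitive Overload",
    "Integration of Physical and Virtual Worlds",
    "Consistency & Standards",
    "Collaboration",
    "Comfort",
    "Feedback",
    "User Interaction",
    "Recognition Rather than Recall",
    "Device Maintainability",
]

def summarize(ratings):
    """Aggregate per-heuristic severity ratings (dict name -> 0..4)
    into an overall summary a practitioner could read at a glance."""
    missing = [h for h in HEURISTICS if h not in ratings]
    if missing:
        raise ValueError(f"Unrated heuristics: {missing}")
    return {
        "mean_severity": sum(ratings.values()) / len(ratings),
        "worst_heuristic": max(ratings, key=ratings.get),
        # Heuristics rated 3+ are flagged for immediate attention.
        "critical": [h for h, r in ratings.items() if r >= 3],
    }
```

A spreadsheet toolkit would compute the same aggregates with formulas; the point is simply that once every heuristic carries a rating, an overall result is available immediately.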

Upon completion of the project, the AR and MR Usability Heuristic Checklist will become a vital resource for any organization considering the adoption of AR. If you would like to learn more or have an idea for an application that could be included in this validation process, please contact Dr. Barbara Chaparro or Jessyca Derby.




Vuforia Engine 10.8

Key updates in this release:

  • Advanced Model Target Improvements: Training times for Advanced Model Targets have been optimized and now depend on the number and size of views. Recognition performance for advanced, close-up views has also been improved.
  • Area Target Improvements:
    • The target’s occlusion mesh is now exposed in the C API which allows native apps to render occluded virtual content in combination with Area Targets as you move through the space.
    • Textured authoring models are now created by the Area Target Creator app and the Area Target Capture API, providing an improved authoring experience in Unity. These scans can be loaded into the Area Target Generator for clean-up and post-processing.
    • Area Target tracking data is now compressed and takes up to 60% less space.
  • Unity Area Target Clipping: Area Target previews in the Unity Editor can be clipped based on height, for faster previewing and better visibility of the scanned space during app development.
  • Engine Developer Portal (EDP) Self-Service OAuth UI: OAuth Engine credentials can now be managed from the EDP, eliminating the need for the command line.
  • Notices
    • High Sensor-Rate Permission: Due to new Android permission requirements, developers should add the “high sensor rate” permission to all native Vuforia Engine apps running on Android 12+ for all releases; otherwise, VISLAM may not work. Read more about VISLAM tracking here.
    • Permission Handling: The Vuforia Engine behavior of triggering OS-specific user permission requests at runtime is deprecated in 10.8 and will be removed in an upcoming release. All native apps should be updated to manage permissions themselves. The 10.8 sample apps share best practices for this.
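As a sketch of the first notice: the “high sensor rate” permission appears to correspond to Android 12’s standard high-sampling-rate sensors permission, declared in the app manifest. This is an assumption worth confirming against the Vuforia release notes for your version.

```xml
<!-- AndroidManifest.xml fragment — permits sensor sampling above 200 Hz
     on Android 12+ (API 31). Assumed to be the "high sensor rate"
     permission Vuforia's notice refers to. -->
<uses-permission android:name="android.permission.HIGH_SAMPLING_RATE_SENSORS" />
```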



Magic Leap and NavVis Announce Strategic Partnership to Enable 3D Mapping and Digital Twin Solutions in the Enterprise

Combining Magic Leap’s advanced spatial computing platform with NavVis’s mobile mapping systems and spatial data platform, the two companies aim to enhance the use of AR applications across key industries, including automotive, manufacturing, retail and the public sector.

As part of this strategic partnership, NavVis will bring its NavVis VLX mobile mapping system and NavVis IVION Enterprise spatial data platform to Magic Leap’s new and existing enterprise customers with an initial focus on manufacturing. Magic Leap customers will be able to leverage NavVis’s expansive visualization capabilities to generate photorealistic, accurate digital twins of their facilities at unprecedented speed and scale.

The market opportunity for digital twins and other forms of advanced visualization is significant – with demonstrated potential to transform the world of work as we know it. While attention around the potential of the metaverse has put a greater focus on all types of mixed reality technology, AR represents an immediate opportunity for businesses to enhance productivity and improve operational efficiency. Magic Leap’s open, interoperable platform will also enable the metaverse to scale for enterprise applications.

While the Magic Leap 2 platform offers cutting-edge scanning and localization capabilities in real-time on the device itself, NavVis’s technology will allow Magic Leap customers to pre-map and deploy digital twins in large, complex settings that can cover up to millions of square feet – including but not limited to warehouses, retail stores, offices and factories – for a variety of use cases, such as remote training, assistance and collaboration. Such applications will enable companies to reduce operational costs, enhance overall efficiency and democratize the manufacturing workforce of tomorrow.

“We are seeing significant demand for digital twin solutions from our enterprise customer base and are thrilled to partner with NavVis to make our shared vision for large-scale AR applications a reality,” said Peggy Johnson, CEO of Magic Leap. “Coupled with our Magic Leap 2 platform, NavVis’s advanced visualization capabilities will enable high-quality, large-scale and novel AR experiences that business users demand.”

The NavVis partnership is an essential component of Magic Leap’s strategy to cultivate an ecosystem of best-in-class technology partners that will deliver on the promise of enterprise AR, leveraging Magic Leap 2’s powerful, open platform. With a global customer base of more than 400 companies, including the likes of BMW, Volkswagen, Siemens and Audi, NavVis has a proven track record of delivering immediate and long-term value to enterprises looking to modernize their operations.

“Enterprise AR solutions for larger-scale activations will open the door for greater innovation in the workplace,” said Dr. Felix Reinshagen, CEO and co-founder of NavVis. “Our own experience shows that 3D mapping and digital twins are a fundamental foundation for large-scale persistent AR applications. We’re experiencing strong demand across many verticals with industrial manufacturing as a clear front runner. Magic Leap is a world leader in delivering impactful, innovative experiences in these verticals, and we are excited to collaborate with the company to advance this mission and further enable the future of work.”

About Magic Leap

Magic Leap, Inc.’s technology is designed to amplify human potential by delivering the most immersive Augmented Reality (AR) platform, so people can intuitively see, hear, and touch digital content in the physical world. Through the use of our advanced, enterprise-grade AR technologies, products, platforms, and services, we deliver innovative businesses a powerful tool for transformation.

Magic Leap, Inc. was founded in 2010 and is proudly headquartered in South Florida, with eight additional offices across the globe.

About NavVis

Bridging the gap between the physical and digital world, NavVis enables service providers and enterprises to capture and share the built environment as photorealistic digital twins. Their SLAM-based mobile mapping systems generate high-quality data with survey-grade accuracy at speed and scale. And with their digital factory solutions, users are equipped to make better operational decisions, boost productivity, streamline business processes, and improve profitability. Based in Munich, Germany, with offices in the United States and China, NavVis has customers worldwide in the surveying, AEC, and manufacturing industries.




Blippar brings AR content creation and collaboration to Microsoft Teams

LONDON, UK – 14 June 2022 – Blippar, one of the leading technology and content platforms specializing in augmented reality (AR), has announced the integration of Blippbuilder, its no-code AR creation tool, into Microsoft Teams.

Blippbuilder, the company’s no-code AR platform, is the first of its type to combine drag-and-drop functionality with SLAM, allowing creators at any level to build realistic, immersive AR experiences. Absolute beginners can drop objects into a project, which, when published, will stay firmly in place using Blippar’s proprietary surface detection. These experiences will serve as the foundation of the interactive content that will make up the metaverse.

Blippbuilder includes access to tutorials and best practice guides to familiarise users with AR creation, taking them from concept to content. Experiences are built to be engaged with via browser – known as WebAR – removing the friction of, and reliance on, dedicated apps or hardware. WebAR experiences can be accessed through a wide range of platforms, including Facebook, Snapchat, TikTok, WeChat, and WhatsApp, alongside conventional web and mobile browsers.

Teams users can integrate Blippbuilder directly into their existing workflow. Designed with creators and collaborators in mind, whether product managers, designers, creative agencies, clients, or colleagues, Blippbuilder lets organisations unite their approach and implementation, all within Teams. The functionality of adaptive cards, single sign-on, and notifications, alongside real-time feedback and approvals, provides immediate transparency and seamless integration from inception to distribution. The addition of tooltips, support features, and starter projects also allows teams to begin creating straightaway.

“The existing process for creating and publishing AR for businesses, agencies, and brands is splintered. Companies are forced to use multiple tools and services to support collaboration, feedback, reviews, updates, approvals, and finalization of projects,” said Faisal Galaria, CEO at Blippar. “By introducing Blippbuilder to Microsoft Teams workstreams, including team channels and group chats, we’re making it easier than ever before for people to collaborate, create, and share amazing AR experiences with our partners at Teams.”

Utilizing the powerful storytelling and immersive capabilities of AR, everyday topics, objects, and content, from packaging, virtual products, adverts, and e-commerce, to clothing and artworks, can be ‘digitally activated’ and transformed into creative, engaging, and interactive three-dimensional opportunities.

Real-life examples include:

  •  Bring educational content to life, enabling collaborative, immersive learning
  •  Visualise and discuss architectural models and plans with clients
  •  Allow product try-ons and 3D visualisation in e-commerce stores
  •  Create immersive onboarding and training content
  •  Present and discuss interior design and event ideas
  •  Bring print media and product packaging to life
  •  Enable artists and illustrators to redefine three-dimensional artwork

In today’s environment of increasingly sophisticated user experiences, customers are looking to move their technologies forward efficiently and collaboratively. Having access to a comprehensive AR creation platform is a feature that will keep Microsoft Teams users at the forefront of their industries. Blippbuilder in Teams is the type of solution that will help customers improve the quality and efficiency of their AR building process.

Blippar also offers a developer creation tool, its WebAR SDK. While Blippbuilder for Teams is designed to be an accessible and time-efficient entry point for millions of new users, following this validation of AR, organisations can progress to building experiences with Blippar’s SDK. The enterprise platform boasts the most advanced implementation of SLAM and marker tracking, alongside integrations with the key 3D frameworks, including A-Frame, PlayCanvas, and Babylon.js.




Factory layout Experience – Theorem Solutions

Optimize designs in immersive XR

The Factory Layout Experience enables a planning or layout engineer, working independently or with a group of colleagues, locally or in remote locations, to optimize factory layouts through the immersive experience of eXtended Reality (XR) technologies. Seeing your data at full scale, in context, instantly reveals the clashes, access issues, and missing items that a CAD screen cannot show.

On the shop floor there are literally thousands of pieces of equipment, much of it bought in and designed externally. Building designs may exist only as scans or in architectural CAD systems, and robot cells may be designed in specialist CAD systems. There will be libraries of hand tools, storage racks, and stillage equipment designed in a range of CAD systems, and product data designed in-house in mechanical CAD. To understand the factory and assess changes, all of that has to be brought together to get a full picture of where a new line, robot cell, or workstation will fit.

A catalogue of 3D resources can be snapped to existing 2D factory layouts to quickly realize a rich 3D layout. Advanced positioning makes it easy to move, snap, and align 3D data. Widely used plant and equipment is readily available, so there is no need to design it from scratch for every new layout. Simplified layout tools enable non-CAD experts to position, align, and snap layout objects quickly, allowing all stakeholders to be involved in the process and improving communication.

Testing Design and Operational Factors

Human-centred operations can be analysed using mannequins that can be switched to match different characteristics. You can test design and operational aspects of a variety of human factors to determine reachability, access, and injury-risk situations, ensuring compliance with safety and ergonomic standards.

The Experience also enables companies to avoid costly layout redesign by allowing all parties involved to review the layout collaboratively, make or recommend changes, and capture those decisions for later review by staff who could not attend the session.




AREA Issues RFP for Research on AR-Delivered Instructions for High-Dexterity Work

To date, AREA members have funded 10 AR research projects on a wide range of timely topics critical to the adoption of enterprise AR. Now the AREA is pleased to announce a call for proposals for its 11th research project, which will evaluate the effectiveness of AR for delivery of work instructions for tasks requiring high dexterity. Building on prior research, the project will expand the current state of knowledge and shed light on AR support for tasks that involve high levels of variability and flexibility in completion of a set of manual operations, such as that found in composite manufacturing.

This project will answer questions such as:

  • How does AR for high dexterity tasks differ from other instruction delivery methods?
  • How are users impacted by the delivery of instructions using AR in high dexterity tasks?
  • What are the key factors informing decision-making and driving return-on-investment in delivering work instructions for particularly dexterous, manual tasks?
  • Can AR-assisted work instructions help improve quality, productivity, or waste reduction and/or rework of manufactured parts?

This AREA research project will produce: a report on the efficiency and effectiveness of AR work instruction for tasks requiring high levels of dexterity; a research data set; a video summary highlighting the key findings; and an interactive members-only webinar presenting the research findings to the AREA.

The AREA Research Committee budget for this project is $15,000. Organizations interested in conducting this research for the fixed fee are invited to submit proposals. All proposals must be submitted by 12 noon Eastern Daylight Time on July 1, 2022.

Full information on the project needs, desired outcomes and required components of a winning proposal, including a submission form, can be found here.

If you have any questions concerning this project or the AREA Research Committee, please email the Research Committee.





Building an immersive pharma experience with XR technology

In the world of pharma manufacturing, precision is key. To execute flawlessly, pharmaceutical scientists and operators need the proper training and tools to accomplish the task. User-friendly augmented reality (AR) and extended reality (XR) technology that can provide workflow guidance to operators is invaluable, helping name-brand companies get drugs, vaccines, and advanced therapies to patients faster.

AR has been a cost-effective way to improve training, knowledge transfers, and process execution in the lab during drug discovery and in the manufacturing suite during product commercialization. Apprentice’s AR Research Department is now seeing greater demand within the pharma industry for XR software capabilities that allow life science teams to use 3D holograms to accomplish tasks.

For example, operators are able to map out an entire biomanufacturing suite in 3D using XR technology. This allows them to consume instructional data while they work with both hands, or better understand equipment layouts. They can see and touch virtual objects within their environment, providing better context and a much more in-depth experience than AR provides.

Users can even suspend metadata in a 3D space, such as the entrance to a room, so that they can interact with their environment in a much more complete way, with equipment, objects and instruments tethered to space. Notifications regarding gowning requirements or biohazard warnings for example will automatically pop up as the operator walks in, enriching the environment with information that’s useful to them.

“It’s all about enhancing the user experience,” said Linas Ozeratis, Mixed Reality Engineer at Apprentice.io. “At Apprentice, our AR/XR Research Team has designed pharma-specific mixed reality software for the HoloLens device that will offer our customers an easier, more immersive experience in the lab and suite.”

Apprentice’s XR/AR Research Team is currently experimenting with new menu design components for the HoloLens device that will reshape the future of XR user experiences, making it easier for them to interact with menus using just their fingers.

Apprentice’s “finger menu” feature allows users to trigger an action or step by ‘snapping’ together the thumb and individual fingers of the same hand. Each finger contains a different action button that can be triggered at any time during an operator’s workflow.

“Through our research, we’ve determined that the fingers are an ideal location for attaching AR buttons, because they allow users to trigger next steps without their arm or hand blocking the data they need,” Ozeratis added. “It’s quite literally technology at your fingertips.”

Why does the pharma industry want technology like this? Aside from the demand, there are situations where tools like voice commands are simply not feasible. The AR Research Team also learned that interactive finger menus feel more natural to users and can be mastered quickly. Life science teams are able to enhance training capabilities, improve execution reliability and expand the types of supporting devices they can apply within their various environments.

“Introducing these exciting and highly anticipated XR capabilities is just one stop on our roadmap,” Ozeratis adds. “There are bigger and bolder things ahead that we look forward to sharing as the pharma industry continues to demand more modern, intelligent technologies that improve efficiency and speed.”




Rokid displayed its AR glasses at AWE 2022

Liang Guan, General Manager at Rokid, enthusiastically stated:
“Numerous top tech companies are currently exploring AR, XR, or the metaverse. Since as early as 2016, Rokid has been proactively expanding our AR product pipeline across the leading technological areas of optics, chips, smart voice, and visual imaging. Today, we have X-Craft deployed in over 70 regions, and Air Pro has been widely used in 60+ museums around the world. Moving forward, Rokid will keep delivering real value to enterprises through its line of AR products.”

Rokid products empower the frontline workforce, providing real-time analysis, views, and documents to the control center. Many media representatives and participants were impressed after trying Rokid products, saying that the various control modes provided by Rokid AR glasses are convenient to operate and can effectively improve work efficiency.

Rokid X-Craft, demonstrated live at AWE 2022, has officially received ATEX Zone 1 certification from TUV Rheinland Group, becoming the world’s first explosion-proof, waterproof, dustproof, 5G- and GPS-supported XR device. This is not only a great advance in AR and 5G technology but also a breakthrough for explosion-proof AR applications in the industrial field. Many users at the event said after trying it that the safety headset is comfortable to wear and highly competitive in the market. It not only effectively ensures the safety of frontline staff but also helps oil and gas fields increase production capacity.

Rokid Air Pro, a powerful pair of binocular AR glasses, features voice control to help users enjoy a wide variety of media, including games, movies, and augmented reality experiences. Rokid Glass 2 provides real-time analysis, views, and documents to the control center, and has successfully improved traffic management and prevention to ensure the long-term stability of the city.





Contextere launches Madison, an insight engine for the frontline industrial workforce

Each day, millions of men and women in industrial organizations throughout the world spend over 30% of their workday on non-productive time (NPT) activities[i]. They are not idly wasting time; rather, they are actively trying to find the right information, waiting for guidance, or attempting to coordinate with other work teams. While this condition exists throughout companies and across industries, it is particularly endemic within technical maintenance and operations activities. Compounding the situation, once these workers have the information they need, they are still likely to do the job incorrectly 25% of the time[ii]. This elevated human error rate (HER) results in costly rework as well as an increased potential for catastrophic equipment failure and human injury.

The causes of high NPT and HER on the industrial frontline can be as varied as the companies that experience the problem. In most organizations, data is trapped in silos and contextual relevance across functional domains and activities is often lost. Information technology systems remain disconnected from operational technology systems which keeps critical insights from reaching workers on the last tactical mile. And despite massive corporate investment in data capture and analytics, the application of that information remains constrained to headquarters operations – enterprise efficiency and production optimization, and capital equipment investment planning. Frontline workers rarely have access to information that may be relevant to their own decision-making and activities.

Exacerbating the obstacles outlined above is a fundamental structural issue that continues to impact companies – a workforce skills gap. The adoption of new technologies and shifts in demographics have been radically transforming the way that organizations conduct business and the type of skills needed in their workforces. Accelerating workforce retirement and an overly long time to proficiency when onboarding new staff is resulting in the loss of tacit expert knowledge and a lack of skilled personnel. This skills gap worsens NPT and HER as smaller teams of more inexperienced workers must maintain, repair, and operate increasingly complex equipment with less knowledge and fewer resources available to them.

The Madison Insight Engine, recently launched by AREA member Contextere, is the first solution to combine data extraction, machine learning, and natural language understanding to provide insights and decision support to frontline technical workers maintaining, repairing, operating, and manufacturing complex equipment. This capability enables industrial workers to get the job done right the first time, develop their knowledge and skills on the job, and improve their productivity and safety.

In recognizing Contextere in its 2020 Cool Vendors for the Digital Workplace report[iii], Gartner noted that analytics and insights engines “typically focus on the needs of desk-based workers in large organizations,” whereas the Madison Insight Engine is unique in that it “uses context alone to proactively deliver all of the relevant information needed to complete a task” regardless of user location or domain.

Madison applies machine learning together with conversational natural language processing to deliver curated guidance proactively and predictively to a technician or analyst in an industrial setting based on their evolving local real-time context. The focus of Madison algorithms is to determine and deliver just the right piece of information – a reductionist approach to curating the vast amount of available enterprise data.

Industrial organizations across the globe are seeking to address productivity issues and a widening skills gap in their frontline workforces. By providing critical information proactively, when and where it is needed, the Madison Insight Engine enables each industrial worker to continuously grow their knowledge and competency on the job, perform their tasks safely, and be their productive best. In turn, companies receive the benefit of effective workforce development, maximum equipment uptime, and optimal human-machine performance. To learn more and see a demonstration of Madison, go here.

[i] Slaughter, A., Bean, G., & Mittal, A. (2015, August 14). Connected barrels: Transforming oil and gas strategies with the Internet of Things. Retrieved from http://dupress.com/articles/internet-of-things-iot-in-oil-and-gas-industry/

[ii] Lyden, S. (2015). First-Time Fix Rate: Top 5 Field Service Power Metrics. Retrieved from https://www.servicemax.com/uk/fsd/2015/04/13/first-time-fix-rate-field-service-metrics-that-matter/

[iii] https://www.gartner.com/en/documents/3985043