ARLU—the Right Event at the Right Time

EPRI is proud to collaborate with the AREA on the first-ever Augmented Reality in Leading-Edge Utilities (ARLU) event this July, where we will lead the industry in examining a disruptive technology and in anticipating and solving issues through collaborative effort. In fact, ours is the only industry we know of where Augmented Reality as a disruptive innovation is being openly discussed. This isn't going unnoticed. Other industries are pointing at utilities and saying, "Hey, look what they're doing." Utilities are rarely perceived as having an active role in exciting new trends.

Three in One

The ARLU event is, in fact, three events in one. First, it's a meeting where EPRI and utility industry representatives will present their Augmented Reality research and projects to vendors developing applications for the utility industry. Vendors will see where utilities are placing emphasis in their development efforts and learn about the issues they're encountering. Requirements such as the size, weight and battery life of wearable technologies will be explored in the presentations, giving participants a deeper understanding of the issues facing the introduction of Augmented Reality in utilities.

Next, vendors will present their latest technologies for immediate feedback from industry experts. Not all technologies fit every utility situation, and discussions around the fitness for purpose of the presented technologies will be lively and informative. Finally, a workshop on gaps in existing standards will bring multiple perspectives to the problems of creating safe, comfortable and interoperable AR experiences in the utility space.

Thought Leaders

Having subject matter experts together in one room is one of the key objectives of this meeting. As we've been preparing the ARLU event, we've invited some of the brightest people in the utility and utility-software industries to mix with thought leaders in Augmented Reality. We expect the impact to last much longer than the two days in July, because new ideas will emerge in the weeks and months that follow as the participants who meet in Charlotte continue to develop relationships.

We expect to capture some of the ideas these thought leaders generate and to share the outcomes of discussions with the broader community so that many others can also benefit.

Time is Right

We feel this is the right time for such a conference. Today, judging a technology solely by what it can do right now is the wrong way to look at it. Advances occur almost daily, and it's better to first define what's needed to shape the technology's future state. That's where Augmented Reality is today: practical applications are just now being introduced, but an explosion of functionality is coming. By the time the average person notices the ubiquity of Augmented Reality, many of the issues we are going to discuss in Charlotte will already have been settled.

Wearable technologies with Augmented Reality are at a stage where real utility applications are possible. At the same time, shifting demographics at utilities are bringing in younger, less experienced workers—as older, more practiced workers are leaving. There needs to be an orchestrated “changing of the guard” where institutional knowledge, gained by years of hard work and experience, is transferred to a younger, more tech-savvy generation. The technologies presented at ARLU will deliver remote expertise and put information at the fingertips of crews composed of less seasoned individuals.

The wise man says it’s better to act on a lightning flash than wait to hear the thunder. That’s why we planned this event in 2015 and look forward to seeing many of the readers of this blog at the first ARLU event.




Augmented Reality Industry Leader: Bob Meads, CEO iQagent

Today Christine Perey, Executive Director of the AREA, interviews Bob Meads, CEO of iQagent and member of the AREA board. Bob is pioneering the use of mobile Augmented Reality on the plant floor to increase worker efficiency and safety.

Q. What is the level of interest in enterprise AR among people in your company?

The level of interest in this technology is high; however, we don't like to put technology first. As I have written previously, AR is a great fit for plant floor challenges. But using AR (or any technology) for its own sake is a flawed approach if you want to sell a product. We identify the problems we want to solve, then fit the best technology to solve them elegantly. The litmus test of a great AR solution is that, at first, you don't notice it's an AR solution: your attention is captured by the system's usefulness and applicability to the problem it addresses, and the realization that it uses AR comes as an afterthought.

Q. How does your company, group or team plan to make an impact in enterprise Augmented Reality?

We plan to bring to the enterprise market mobile apps that solve real problems, in keeping with our “practical” approach to Augmented Reality.

Q. In your opinion, what are the greatest obstacles to the introduction of AR in enterprise?

The three barriers we encounter most frequently are inadequate infrastructure, security issues and resistance to new technology. AR technology used as part of a plant solution will overwhelmingly be delivered on mobile devices, so the barriers to using mobile devices become barriers to using AR on the plant floor. It can be a big investment for a plant to create a wireless infrastructure that covers the plant floor well. Many plants also haven't fully embraced the use of electronic documents over paper ones, despite the obvious benefits. Mobile devices also tend to raise alarm bells with IT for many reasons. Then there is concern over ROI: that once the infrastructure is added, these new mobile devices and software will not actually be used or won't provide a return on investment.

Q. Are you focused on a particular industry? If so please describe it, as well as the customers with whom you work.

While we serve most industries, automotive, chemical/pharmaceutical and food & beverage are where we focus. This is because these plants have lots of automation, and, therefore, lots of data and resources that the plant staff access on a daily basis. The ROI of our product, iQagent, is very dramatic for these kinds of plants.

Q. How do you characterize the current stage in enterprise AR adoption? What will it take to go to the next stage or to accelerate adoption?

In my opinion, AR technologies are still in the trough of the Gartner Hype Cycle, but slowly climbing out. The potential for enterprise AR to help workers visualize data and resources as they relate to real-world equipment or processes is enormous. It reduces the skillsets needed to perform adjustments or repairs, reduces human error, and lessens the need for training. It's a giant win-win. So why isn't it already in widespread use? Because AR solutions tend to be highly customized and developed for specific customers. This approach is expensive, introduces risk and delays the ROI for the customer. This is due, in part, to the lack of standards. The breakthrough for AR in the enterprise will come when there are more off-the-shelf AR solutions that are easy to integrate and deploy and that provide obvious benefits and immediate ROI. Right now most AR products are toolkits because there are no AR standards out there. If standards were created and adopted, it would be easier for AR providers to create off-the-shelf solutions. This in turn reduces risk, lowers cost and provides a well-defined ROI for the customer.

Q. We’d like some historical context for your current role. How did you get interested in or develop your role in enterprise Augmented Reality?

I have been in industrial automation software and integration for 20 years, and have always loved technology. iQuest, my automation company, specializes in using different technologies to solve plant floor problems. When the iPad was released, we began looking for ways to leverage it on the plant floor. We started with identifying common problems we could solve with a mobile app, and then developed iQagent and the concept of “practical” augmented reality, or, in the words of Ars Technica, “Just Enough AR.”

Image courtesy of iQagent

iQagent offers support for Windows 8.1




DAQRI @ AWE 2015

This post was previously published on the DAQRI blog and posted here with permission.

As we head into Augmented World Expo 2015, we have seen this event grow and evolve alongside the industry. Within the last year, we've seen more mainstream conversations about Augmented Reality than ever before. As a result of this increased focus, there is now, more than ever, a need to support and encourage innovation in Augmented Reality and computer vision technologies.

This year, we are excited to showcase our products and to spotlight our recent acquisition of ARToolKit, the world's most widely used augmented reality SDK. By releasing the ARToolKit professional SDKs under LGPL v3.0 for free use, DAQRI is committing its resources to the open source community in the hope that (in the words of our founder, Brian Mullins) "we can kick off the next AR revolution and inspire a whole new generation to pick it up and make things that haven't been imagined yet."

On the exhibition floor, Ben Vaughan and Philip Lamb from ARToolworks will be available to discuss ARToolKit and DAQRI’s newly-created open source division that they are heading up. In addition, representatives from DAQRI will be demoing DAQRI 4D Studio and showcasing exciting technologies from Melon, our brain computer interface division.

DAQRI executives will also be presenting throughout the conference:

Monday, June 8:

  • 10:45 am – 11:30 am—DAQRI 4D Studio Tutorial
    Katherine Wiemelt, Sr. Product Director, DAQRI
  • 2:15 pm – 3:00 pm—How to Measure Enterprise AR Impacts
    Andy Lowery, President, DAQRI

Tuesday, June 9:

  • 11:30 am – 1:00 pm—Smart Glasses Introductions
    Matt Kammerait, VP Product, DAQRI
  • 2:00 pm – 3:00 pm—Entertainment, Games, and Play
    Brian Selzer, VP Business and Product Development, DAQRI
  • 7:00 pm – 8:00 pm—Auggie Awards
    Brian Mullins, Founder and CEO, DAQRI

Wednesday, June 10:

  • 2:45 pm – 3:00 pm—From Evolution to Revolution: How AR Will Transform Work in the Future
    Brian Mullins, Founder and CEO, DAQRI



AR: A Natural Fit for Plant Floor Challenges

Much has been made recently of how Augmented Reality will soon merge our digital lives with our real ones, bestowing new powers on our social and working existence. Recent advances in technology have lulled us into believing that AR in the workplace is just around the corner. Many of us have looked forward to high-tech glasses, watches and other wearables finally delivering on that promise, inspired by viral YouTube videos (here and here) showing workers wearing glasses with full-field-of-view AR displays. However, this has yet to materialize.

The recent withdrawal of Google Glass and the general failure of wearables to meet expectations have shaped public perception of enterprise AR as falling rapidly from Gartner's Peak of Inflated Expectations into the Trough of Disillusionment.

AR Is a Natural Fit for Solving Plant Floor Challenges

Gartner has pigeonholed AR technology into the digital marketing niche. This is possibly the result of highly visible and successful AR brand engagement campaigns, such as for sports teams, automobile companies and even business-to-business marketing. The Augmented Reality feature provided in the IKEA catalog companion application demonstrates how AR can be useful as well as drive consumer brand engagement. These campaigns and useful applications primarily address the outward-facing communication needs of the brands and are measured in terms of greater sales or customer loyalty.

Turning towards business operations, those of us involved in the manufacturing and automation field see AR as a way to address many plant floor challenges. Here are a few examples of common plant floor issues, which we believe are a natural fit for enhancement with mobile AR.

Plant floor problems and how AR helps:

  1. Problem: When following a procedure, workers often spend time trying to identify the part of the machine or adjustment point that requires their attention.
     How AR helps: Visually identify and direct workers to the specific part or adjustment point that requires their attention.

  2. Problem: Workers performing an unfamiliar or infrequent task spend time searching in manuals for procedures that match the task, or asking co-workers for help.
     How AR helps: Provide contextual visual instructions that show workers how to correctly perform unfamiliar tasks.

  3. Problem: Workers spend time searching for data and resources that uniquely identify the equipment on which they are working.
     How AR helps: Identify equipment or processes and visually display the relevant data and resources.

  4. Problem: Technical resources required to evaluate and efficiently respond to unplanned downtime events are not available in real time.
     How AR helps: Provide visual communication tools that give users and remote resources a common, real-time or "snapshot" view of the equipment or process.

Table: Potential AR solutions to common plant floor problems

It's very tempting for an engineering team to develop an eye-catching AR application for a demonstration and to suggest that the technology can also easily address more complex problems. Such solutions are usually implemented by experts using software toolkits rather than commercial off-the-shelf software, and the final implementations delivered to the customer are usually highly customized. In these cases, ROI is difficult to define. iQagent's approach to solving plant floor problems with AR involves first focusing on the problems to be solved, and then defining a good mobile AR solution to address them.

Interventions are Collaborative Endeavors

One challenge we address is #4 from the table above: technical resources required to evaluate and efficiently respond to unplanned downtime events are not available in real time.

Production downtime costs are often measured in hundreds or thousands of dollars per minute. When a production line goes down, the operator must communicate with remote technical resources in order to get production running again quickly. One factor preventing effective communication is the education gap between the operator and the engineer: operators aren't engineers, and engineers aren't used to operating the equipment through all phases of the process. Each has specialized technical and procedural knowledge that can contribute to a solution, but traditional channels such as phone, text or e-mail aren't ideal tools for collaboration. The operator must decide which details are important to convey to the engineer, and the engineer must find the right questions to ask in order to get a clear picture of the problem. Due to the prohibitive cost of production downtime, this effort has a very small window in which to be effective. At some point, the decision must be made to bring an expert on-site in order to return the line to normal production.

We then considered why engineers and operators are more efficient in resolving production downtime issues when collaborating in person. The operator can directly show the problem to the engineer, referring to live process values and performance indicators relevant to the process from the local automation system. The engineer can analyze the problem in real time, asking the right questions of the operator in order to resolve the problem.

A successful mobile solution duplicates the benefits of in-person collaboration, allowing each participant to effectively contribute their specialized knowledge to a common view of the process, including live data and operational details from the automation systems that are relevant to the problem.

This particular solution is a great fit for AR-enhanced software on a mobile device.

Augmented Reality with iQagent

iQagent uses the device's video camera to identify a unique piece of equipment by scanning a QR code. The software overlays relevant process values and associated data on the camera's displayed video feed, which can also be recorded as a snapshot or video. This provides the required common view of the process. Operators can also annotate directly on the images or video, making notes and drawing attention to areas of interest for the engineer to analyze, in effect "showing" the problem. When finished, the user saves and e-mails the video to the remote technician, who now has a much more complete picture of the problem and, in many cases, can resolve the issue more efficiently.
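
To make the flow concrete, below is a minimal sketch of such a scan-and-overlay loop, written in Python with OpenCV. It illustrates the general technique, not iQagent's actual code; get_process_values() is a hypothetical stub standing in for a query against the plant's automation or historian system.

```python
# Minimal sketch of a QR-scan-and-overlay loop (illustrative only, not
# iQagent's code). Requires OpenCV: pip install opencv-python
import cv2

def get_process_values(equipment_id):
    # Hypothetical stub: a real system would query live process data
    # (e.g., automation/historian tags) for the identified equipment.
    return {"Line speed": "42 m/min", "Zone temp": "181 C", "Status": "RUNNING"}

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)  # device camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    equipment_id, points, _ = detector.detectAndDecode(frame)
    if equipment_id:  # a QR code was found and decoded
        # Overlay the equipment's process values on the live video feed.
        lines = [f"{tag}: {val}" for tag, val in get_process_values(equipment_id).items()]
        for i, text in enumerate([f"Equipment: {equipment_id}"] + lines):
            cv2.putText(frame, text, (10, 30 + 25 * i),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("AR overlay", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```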

We feel iQagent is a great solution to some common plant floor challenges. But having a great product isn’t an end but a beginning. To make any product a success, you have to get it in front of users who need it, and you must support and continually improve the product. This is why we joined the AR for Enterprise Alliance. The AREA enables us to collaborate with other like-minded AR solution providers, end users and customers. Through education, research and collaboration, we will help to move AR out of the Trough of Disillusionment, up the Slope of Enlightenment and onto the Plateau of Productivity.




Why Augmented Reality and Collaboration Make for a Safer and Better World

Augmented Reality (AR)-enabled systems can show a mechanic how to repair an engine, or perhaps in the future will guide an inexperienced surgeon through a delicate heart operation. In my opinion, it's when AR is combined with human collaboration that the magic begins. AR will soon work its way into a variety of applications that are bound to improve our lives, but more importantly, I am convinced it will become a catalyst for greater human understanding and world peace.

Augmented Reality Can Bring Us Closer

Everyone's heart raced when Jake Sully, the wheelchair-bound Marine in the movie Avatar, first connected his thoughts to those of his avatar, walked and then ran. His mission was to infiltrate the society of the natives, learn their customs and, having gathered that information, help destroy their world. Of course, we all know how the story ends… It's difficult to do harm to those we know. The first step in Hitler's campaign to eliminate those he considered unworthy was to convince his followers that the others were less than human. In fact, this is a universal technique in incitement to violence against another group. It is only when we finally get to know someone that, even if we don't agree, we can begin to understand and care about them.

Sharing Experiences

AR allows a user to see an enhanced view of reality, placing graphic images and 3D models over the real background. This will be great for building and repairing things by ourselves, but when we combine that capability with modern telecommunications, remote users will be able to participate in those processes with local users in real time, appearing to the wearer of the glasses as if standing alongside them. We won't just see our grandkids in a Skype window; we will take them with us on new adventures around the world or in our backyard. An astronaut in space will literally see the hand of the equipment specialist on Earth pointing to the board to be replaced as they speak.

Gutenberg changed the world because the printed page could easily reproduce the manuals that apprentices used to learn the trades that freed them from the fields. Radio and then television added sound, motion and, recently, 3D to the flood of information, and telecommunications has brought the cost of distributing it to practically zero. Now AR combines these capabilities and opens an infinite number of parallel worlds that you may create and visit, and in which you may acquire skills through one-on-one instruction. It's the closest thing to teleportation this side of Star Trek.

Non-verbal communication is said to account for between 55 and 97 percent of communication between people, depending on the study. AR will convey practically the same information by enabling virtual "belly to belly" proximity. You will be able to virtually sit in a conference room and interact with other remote participants, watch a theater performance in your living room or tag along with a friend on an exotic trip to a foreign land. And that friend will be able to see you, too.

New Ways of Displaying Information

Talk about disruptive. This is downright neutron-bomb material. Why do you need a laptop or tablet when you can see a screen suspended in mid-air, with the glasses projecting a keyboard onto any surface? Gone are large-screen TVs, which kept everyone sitting stationary, watching the game from the same angle. Why wouldn't they prefer it in perfect 3D? Forget glass cockpits in airplanes; why not have all the instruments projected in your field of view? How about infrared images of deer or pedestrians in fog or at night, shown on the windshield of your car in time to avoid hitting them?

Augmented Reality and Collaboration

But, again, collaboration use cases will take the cake. The level of empathetic bonding that occurs when you're in the room with another person will make current social messaging seem like sending smoke signals. Professionals in other countries will virtually know you and work with you on projects, as I am proposing with the Talent Swarm platform. Along with such proximity-enabled work will come a better understanding of other countries and cultures.

Collaboration is key, but it can't happen at scale if everyone needs to buy and use exactly the same hardware and software. Collaboration across networks and companies as diverse as the places where humans live and work builds upon deep interoperability, and interoperability with existing and future systems will require a globally agreed-upon set of open standards. We will work within the AREA to strongly advocate for interoperable systems and push for global standards together with other AREA members. Once we have collaborative AR platforms, the benefits of this technology will rapidly serve all the people of the world. Becoming an AREA founding sponsor member is, for Talent Swarm, not only common sense but a stake in the ground, demonstrating our leadership toward a more productive and peaceful world. We will avoid embarking on another wasteful battle such as VHS vs. Betamax, and will not allow a single company to reduce the opportunities or lock others out. Christine Perey, Executive Director of the AREA, refers to this as our mandate: to ensure that an ecosystem of AR component and solution providers is in harmony with customers' needs and able to deliver the diversity and innovation upon which economic success is based.

Path to the Future

With a concerted group goal centered on the advancement of AR, and with many technological developments both in the works and being introduced at an increasingly fast pace, we will one day look back at 2015 and ask: how did we ever get along without Augmented Reality?




Augmented Reality at CES 2015 is Better and Bigger Than Ever

There’s something for everyone at CES. Do you need a way to store your earbuds so the cables aren’t tangled? What about printing icing on a cake?

Roger Kay, a technology analyst who writes for Forbes, recommends breaking the event up into ten parts. For him it's not about the horrendous taxi lines or the other logistical issues of dealing with so many people in a relatively small area. I walk everywhere I go; I leisurely covered twenty-four miles on the flat Las Vegas ground in four days, and there are buses to and from the airport. Rather, Kay wants his topics served up in concentrated exhibition floor zones.

As for Kay, many of CES's themes lie outside my areas of interest, but despite the headaches caused by the crowds, the option to see and sample developments in a variety of fields is one of the reasons I return each year.

Finding what I need to see isn't a matter I treat lightly. A month before heading to Las Vegas, I begin planning my assault, because the CEA's website is horrendously inefficient and its new mobile app pathetic. Using brute force, I locate all the providers of head-mounted personal displays, the providers of hardware that is or could be AR-enabling, and the "pure" AR firms with whom I already have relationships. I also plan a long, slow visit through the innovation zones, such as Eureka Park. I know another half day will be dedicated to Intel, Samsung, Sony, LG Electronics and Qualcomm. Then I search for outliers by name.

A few days prior to the event, I begin following the news feeds on social media and technology trade blogs, scanning the headlines for surprises.

Highlights of my CES 2015

Vendors are making good progress in the personal display space, for reasons that have nothing to do with Google Glass. The first is that more companies are experimenting with new combinations of familiar technology components, particularly hardware. Optinvent is recombining its optical technology with a set of headphones. Seebright is adding a remote control to your smartphone. Technical Illusions is combining reflector technology and projectors with new optics. It's like gene mixing to produce new capabilities and life forms.

Vuzix demonstrated the new waveguide technology in their optical see-through personal displays for Augmented Reality.

That's not to say that designs for the "traditional" optical see-through display form factor are standing still. New investments, such as the one Vuzix received from Intel, are a major accelerator. ODG's sale of patents to Microsoft in 2014 produced sufficient revenue for the company to develop a new model of its device targeting consumers.

The second reason for the significant advances in the personal display product category is the evolution of components. I saw firsthand in many exhibits the "familiar" components these displays must include, such as motion and other sensors, eye-tracking kits and optics. All are rapidly improving, and for these components "improving" means smaller packaging and lower power consumption.

It was good to focus—if only briefly—on the familiar faces of AREA members such as APX Labs and NGRAIN, who were participating in the Epson developer ecosystem booth, and to see the latest Epson products, which seem to be increasingly popular in enterprise. I found APX again in the Sony SmartEyewear zone, where I was able to try on the Sony prototype. I also caught up with executives and saw impressive new AR demonstrations by companies whom I don't often see attending my events. If you're interested, I encourage you to click on these links to learn about Meta, InfinityAR, Occipital, ScopeAR, Technical Illusions, LYTE, XOeye Technologies, FOVE, Jins Company, Elvision Technologies, Avegant and Augumenta. I'm sorry if I neglected to include others that I saw at CES.

Although they were around and showing AR or AR-enabling technologies, and we may have crossed paths unknowingly, I didn’t have a chance to meet with Metaio, Lumus, Lemoptix or Leap Motion.

I spent more time than expected visiting and observing the booths of the Virtual Reality headset providers at CES. There were several exhibition zones dedicated to Oculus VR, with the new Crescent Bay device. The lines of people waiting to try on the new Razer OSVR (Open Source VR) system were stunningly long. It amazes me that a small company like Sulon could afford such a huge footprint in the South Hall, with private briefing rooms for its Cortex display for AR and VR as well as an open exhibit outside.

Elsewhere, hordes swarmed the Samsung Gear VR and Sony Project Morpheus zones. What good are all these headsets without content? I stopped in at JauntVR, which seems to be getting a lot of attention these days. I'm sure there were dozens more companies showing VR development software, but VR is peripheral to my focus.

I was impressed by the NVIDIA booth's focus on Advanced Driver Assistance Systems this year, demonstrating real-time processing of six video feeds simultaneously on the Tegra K1 Visual Computing Module. There were also excellent demonstrations of enterprise use of AR in the Hewlett-Packard exhibit. Intel dedicated a very significant portion of its footprint to RealSense. And, similarly, the Vuforia zone in Qualcomm's booth had expanded in comparison to 2014. The IEEE Standards Association offered an AR demonstration to engage people with its work.

Automotive companies were also showing Augmented Reality. I saw examples in the BMW pavilion, in Daimler’s area, the Bosch booth, and Hyundai’s prototype cars.

At the other end of the spectrum there were many exciting new products in the pico projector category. MicroVision and Celluon were both showing HD pico projectors for use with smartphones; such technology will certainly be considered for projection AR in enterprise. ZTE and Texas Instruments also introduced their latest pico projector models at CES 2015.

Digging in Deeper

Although no longer in Las Vegas, and despite my careful advance planning, I continued with my CES homework for at least a week. For example, I watched the archive of the "New Realities" panel and played back other videos covering AR and VR at CES on CNET, Engadget, Tested and the Financial Times.

The IEEE published an analysis of AR at CES in Spectrum that reaches the same conclusion I drew: the "C" in CES is for Consumer, but a lot of consumer technology is going into corporate IT.

I hope I will have digested all that I gathered at CES 2015 before I begin preparations for 2016.




Terminology Monsters Alive and Well

Most enterprise IT managers use dozens of acronyms and a large specialized vocabulary for communicating about their projects. The mobile industry is particularly rich with layers of terminology. Last year mobile IT professionals were studying and defining policies for BYOD. Now wearable technology is at the top of every mobile technology portal.  

Confusion in Communication

Ironically, Augmented Reality promises to deliver improved communication to the user, yet it is plagued by potential for confusion in its own terminology. The glossaries—yes, there are several—each contain nearly 40 frequently misused terms, with only a few terms overlapping between them. Greg Babb, the AREA editor, has performed an analysis of the differences between the AR Community Glossary v2.2 and the glossary in the Mixed and Augmented Reality Reference Model. This analysis will be discussed with experts during the virtual meeting of the AR Community Glossary Task Force on November 24, 2014.
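
As a concrete (if trivial) picture of what such a term-level comparison involves, the sketch below intersects two term sets in Python. The term lists are illustrative placeholders, not the actual contents of either glossary.

```python
# Illustrative sketch of a glossary comparison; these term lists are
# placeholders, not the real contents of either document.
ar_community_glossary = {"augmented reality", "mixed reality", "tracking",
                         "registration", "point of interest"}
reference_model_glossary = {"mixed reality", "registration", "recognition",
                            "rendering", "augmented reality"}

print("Shared terms:", sorted(ar_community_glossary & reference_model_glossary))
print("Only in AR Community glossary:",
      sorted(ar_community_glossary - reference_model_glossary))
print("Only in Reference Model glossary:",
      sorted(reference_model_glossary - ar_community_glossary))
```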

Who Needs a Glossary?

Simply refer to Milgram’s Continuum. There is a virtual world and a real world. The space between these two extremes is “Mixed Reality.”

It sounds and looks like a simple concept, but debate about the segments within Mixed Reality can consume entire meetings.

Is the virtual "1st & 10" line in football broadcasts Augmented Reality? No, it isn't, according to the debate among experts of the AR Community. And when the details of Mixed Reality need to be spelled out and implemented in a distributed computing architecture by many different people, the definitions are insufficient and the concepts blend together. This is an impediment to enterprise AR introduction and adoption.

Diminished Potential

Speaking of blending together: in early November, Hewlett-Packard announced its spectacular plans for 2015, bringing "Blended Reality" to new personal computing products. The sprout PC replaces the keyboard and mouse with a touchscreen, scanner and other features that let users take actual objects and easily "put" them into a PC.

Seeing a connection with Augmented Reality, the author of this Business Insider article tried to define Virtual and Augmented Reality. “That’s what you get when you put on Google Glass and it projects Google-y facts or images on the world. Or you run an app like Star Chart on your smartphone, hold it up to the sky and it superimposes the constellations on your view of the sky,” wrote Julie Bort to hundreds of thousands of readers.

Forget the fact that Google Glass does not really provide Augmented Reality, and ask the executive running a multi-billion dollar business whether they want an app to project constellations on their warehouse or factory ceiling. Augmented Reality's potential is not only unclear; it is actually diminished by comparisons of this nature (nevertheless, let's not confuse this with "diminished reality," OK?).

The fact that HP is beginning to pay attention to Blended Reality, Mixed Reality or Augmented Reality should not come as a surprise, given the integration of the Aurasma group into the company and the variety of services that could be provided on HP servers for managing and delivering AR experiences. But the new sprout PC looks awfully similar in some ways to demonstrations of Intel's RealSense. If these similarities run deep, then perhaps it is time for Intel and HP to invest in educating their target audiences about these new technologies. And a consistent vocabulary would come in handy.

To make sure that people do not jump to the conclusion that Blended Reality is something invented in 2014 by HP, the Business Insider article points out that Blended Reality was first introduced in 2009 by the esteemed Institute for the Future (IFTF). “The IFTF envisioned it as a sort of tech-enabled sixth sense, which will be worn or maybe even implanted into our bodies and interface with our computers,” concludes the Business Insider piece.

If that is how HP is using the term, there are even bigger problems than the definition of Augmented Reality terminology.

Mixed and Augmented Reality Reference Model

One solution to this obstacle to Augmented Reality deployment is the Mixed and Augmented Reality Reference Model. The candidate specification is available for review and will be voted on within ISO/IEC JTC1 SC 29 WG 11 (MPEG) in 2015.

To learn more about the Mixed and Augmented Reality Reference Model, visit this AREA blog post.




Future for Eyewear is Bright (If Enterprise Requirements Can be Met)

The topic of hands-free displays, or eyewear, for Augmented Reality was popular at InsideAR 2014. It was discussed under many names (smart glasses, eyewear and HMDs, to mention a few) and shown at many of the exhibition stands. On stage, there were no fewer than six presentations focused entirely on hands-free displays. Even speakers not focused on displays mentioned the opportunities such devices will offer once customer requirements are met.

New Report Forecasts Four Waves

During his InsideAR remarks, Ori Inbar of AugmentedReality.org described the latest addition to an already significant body of market research on hands-free AR displays. As Ori mentioned in his preface, the reports to date do not agree on the size of the market, on terminology, or on how to seize the opportunities these devices will offer. It is not clear whether or how this new report addresses that uncertainty.

Entitled simply “Smart Glasses Report,” Ori’s new report compiles findings from tests and interviews conducted with companies providing products and components. The scope does not include devices designed for use with Virtual Reality content.

The purpose of the report is to answer two burning questions: Will smart glasses ever come into their own? And when will this happen? To the first question, the answer is that those contributing to the report felt it was inevitable. As to the second question, it depends.

Ori predicts there will be four waves of technology adoption:

  1. Technology enthusiasts: 10 models of glasses will sell 1 million units within the next year or two
  2. Initial winners will emerge and sell a total of 10 million units by 2016
  3. The early majority market will be composed of fewer competitors and will sell 50-100 million and reach critical mass by 2018
  4. Mainstream adoption will occur between 2019 and 2023 with the shipment of one billion units

Business opportunities will depend on the wave and on the type of company. Ori predicts that by 2020 there will be one 800-pound gorilla and a few challengers. He also predicts that prior to, and even during, 2016, most sales of eyewear will be for enterprise use.

That depends on those devices meeting the requirements of the enterprise customers.

Enterprise Requirements for Head-mounted Displays

In his InsideAR 2014 speech, Dr. Werner Schreiber of Volkswagen provided a very detailed analysis of the requirements that head-mounted display vendors need to meet if they are to achieve traction in enterprise. To set the stage for his analysis, Schreiber began by saying that AR is not going to be popular in production processes until head-mounted displays meet certain requirements; tablets and smartphones are simply not convenient when the tasks people are doing require both hands.

One of the most important requirements described (in fact, the first of at least ten) is power consumption: devices will need to offer a battery life of at least 10 hours. Another requirement is field of view (FOV). In Schreiber's analysis, the FOV must be at least 130 degrees, or a moving FOV of 50 degrees.

Of course, support for corrective lenses is non-negotiable, as are designs that minimize wiring. Where wiring is unavoidable, it needs easy connectors both at the display and at whatever other power or computing devices may be required.

If the hardware can be designed to meet these minimum requirements, significant software challenges remain. Schreiber's ideal platform would permit automatic:

  • Creation of computer data
  • Removal of unused details
  • Creation of workflow
  • Consideration of possible collisions
  • Selection of necessary application tools and materials
  • Publishing of user-generated annotations into the experience

That is a lot of requirements to meet before 2016.

Do you have a product or strategy that will address these needs? Let us know about your product or opinions on these requirements.




Reality Creeps into VR

Virtual and Augmented Reality professionals are increasingly finding their projects converging. Augmented Reality projects can overlay 3D data prepared for VR environments onto the viewed physical world. Virtual Reality specialists are discovering that their skills and tools can be applied to many more use cases when they use the real world as the source for models and as the scene for new experiences.

More Realism

The convergence of AR and VR is the result of several trends. The first is the introduction of commercially ready RGB-D devices. The 3D models generated from RGB-D systems can provide 3D objects for VR. In "The Quest to Put More Reality into Virtual Reality," an article published in the MIT Technology Review, Philip Rosedale, founder of Linden Lab and the visionary behind Second Life, describes how using the latest systems to "capture" reality can reduce the time and costs once required to recreate reality using 3D graphics tools. High Fidelity, Rosedale's latest startup, is using depth cameras and advanced facial detection and tracking algorithms to reproduce people's expressions on the faces of their avatars.
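
To see why depth cameras make "capturing" reality cheap, consider the core step: every depth pixel back-projects to a 3D point through the pinhole camera model. Below is a minimal Python/NumPy sketch; the intrinsics and the depth frame are placeholder values, not any particular vendor's pipeline.

```python
# Minimal sketch: back-project a depth image into a 3D point cloud using
# pinhole intrinsics. All values below are illustrative placeholders.
import numpy as np

fx, fy = 525.0, 525.0    # focal lengths in pixels (placeholders)
cx, cy = 319.5, 239.5    # principal point (placeholders)

# Stand-in for one 640x480 depth frame from an RGB-D sensor, in metres.
depth = np.random.uniform(0.5, 3.0, (480, 640))

v, u = np.indices(depth.shape)   # per-pixel row (v) and column (u) indices
z = depth
x = (u - cx) * z / fx            # pinhole back-projection
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

print(points.shape)  # (307200, 3): one 3D point per depth pixel
```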

Another trend contributing to the bleed-over between AR and VR is the re-use of digital assets. Models created for VR and simulation are increasingly useful for recognizing real-world objects, especially low-texture objects such as those made from glass and steel. These materials are highly reflective, so their surface properties can trick recognition and tracking algorithms. By using the contours and edges of the model as the unique signature and comparing them with the real-world properties of an object, Augmented Reality recognition systems become more efficient, less error-prone and less in need of calibration.
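
A minimal sketch of that idea appears below, assuming OpenCV and placeholder image files: compare a rendered CAD view's edge map against the scene's edges using a chamfer-style score, so matching depends on contours rather than the unreliable surface texture of glass and steel. It illustrates the principle only; a real tracker would also search over pose and scale.

```python
# Illustrative edge-based matching: find where a model's edge template best
# fits a scene, scoring by distance to the nearest scene edge (chamfer-style).
# File names are placeholders; the scene must be larger than the template.
import cv2
import numpy as np

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)            # camera image
template = cv2.imread("model_render.png", cv2.IMREAD_GRAYSCALE)  # rendered CAD view

scene_edges = cv2.Canny(scene, 50, 150)   # edges survive on low-texture metal/glass
tmpl_edges = cv2.Canny(template, 50, 150)

# Distance transform: each pixel's distance to the nearest scene edge.
dist = cv2.distanceTransform(255 - scene_edges, cv2.DIST_L2, 3)

ys, xs = np.nonzero(tmpl_edges)              # template edge-pixel coordinates
th, tw = tmpl_edges.shape
best_score, best_xy = np.inf, None
for y in range(0, scene.shape[0] - th, 4):   # coarse 4-pixel stride for speed
    for x in range(0, scene.shape[1] - tw, 4):
        score = dist[ys + y, xs + x].mean()  # mean edge distance at this offset
        if score < best_score:
            best_score, best_xy = score, (x, y)

print("Best match at", best_xy, "with mean edge distance", best_score)
```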

Moving About

Another reason Virtual Reality professionals are increasingly interested in AR is that users need assistive technologies when performing tasks in the physical world. With VR in a CAVE or on a Powerwall, users must stay in a small, confined area. Even with VR goggles such as the Oculus Rift, a user either sits or stands with limited mobility, since cables tether the user to a computer, and obstacles in the physical world, which are not dynamically introduced into the scene, can be dangerous.

By reusing procedures designed for simulation and training in Virtual Reality and adapting them to AR, the investments a company has made in high-quality digital assets have a potentially greater return. Conversely, new Augmented Reality projects may enter the test phase and reach performance objectives more quickly when their designers do not have to start from a “blank slate.”

Are you noticing these trends in your projects?

Join AREA members and others working at the convergence of AR and VR at the SAE AR/VR Symposium in Dearborn, Michigan on November 18 and 19, 2014.





It Is All About People

In his presentation on the InsideAR 2014 stage, AREA member Carl Byers of NGRAIN Corporation shared with the audience his conclusion that Augmented Reality is "all about people." In the middle of a technology-centric event, held at the heart of the most densely AR-populated region of the world (Munich, Germany), it is important to reframe why all these activities and investments matter: Augmented Reality helps people see digital data in context.

The “all about people” guideline applies in medicine as well. Improving patient outcomes is at the heart of Dr. Kavin Andi’s research at St. George’s Hospital at the University of London.

Dr. Andi is an oral and maxillofacial surgery consultant who also practices microvascular reconstructive facial plastic surgery. In his InsideAR presentation, Dr. Andi explained how Augmented Reality could provide value in:

  • Designing and communicating tumor removal and reconstructive processes
  • Detecting airway obstruction
  • Planning bone and tissue harvesting

The presentation also introduced some of the many tools surgeons use to achieve positive patient outcomes. Some tools are physical: scalpels, saws, clamps and hoses of many types. Others are software. In addition to the many credentials he has earned on his journey from molecular biology to reconstructive surgery, Dr. Andi has invested heavily in mastering the use of a dozen different software products in a graphics pipeline.

Beginning with scans of the patient's body, the processes he has defined permit surgeons to better prepare for surgery. By studying and planning the procedures to be performed in the operating theater in minute detail in advance, surgeons reduce the time spent in the actual theater.

Patient scanning is highly sophisticated, involving measurements of tumor and bone density, blood flow and other critical factors. But as in other fields where big data is accessible to the expert, the data needs to be accompanied by analytical tools, and, in the case of surgery, by real-time visualization.

The first gap Dr. Andi needs technology to address is advanced segmentation. Better segmentation of the data would separate the volume of the tumor from the surrounding affected area. Then, with this data superimposed on the patient, Augmented Reality can help surgeons and assistants—and ultimately the patient—visualize the proposed treatment.
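
For contrast, the sketch below shows the naive baseline that "advanced segmentation" must improve upon: simple intensity thresholding plus connected-component labeling, run here on synthetic data in Python with NumPy and SciPy. Real tumor segmentation has to cope with diffuse boundaries and deforming tissue, which is precisely the gap described.

```python
# Deliberately naive volume segmentation on synthetic data: threshold by
# intensity, then keep the largest connected component. Real clinical
# segmentation must do far better than this baseline.
import numpy as np
from scipy import ndimage

volume = np.random.normal(100.0, 15.0, (64, 64, 64))  # stand-in for scan voxels
volume[20:30, 20:30, 20:30] += 80.0                   # synthetic dense region

mask = volume > 150.0                  # crude intensity threshold
labels, n = ndimage.label(mask)        # connected components in 3D
if n:
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)  # keep the biggest component
    print("Candidate region volume:", int(largest.sum()), "voxels")
```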

Leaving diseased tissue in the patient, or removing too much tissue due to low-accuracy visualization, can impact patient outcomes. That is why surgeons also need sub-millimeter accuracy on deforming surfaces for registration and tracking with hands-free displays.

When this can be achieved, Dr. Andi envisages that he and other surgeons will be able to perform complex procedures with a 3D digital overlay on the patient instead of constantly referring to a display off to the side.

To learn more, watch Dr. Andi’s InsideAR 2014 presentation below.