Prof. Jose Quenum
At its Worldwide Developers Conference in 2023, Apple finally entered the extended reality (XR, the umbrella term for augmented reality (AR) and virtual reality (VR)) race by launching the Vision Pro, a mixed-reality headset. So far, the device has received mixed reviews, largely because of its steep price.
However, most WWDC attendees who got to try the device were impressed by its high quality, both the materials it is made of and the technology it is built on.
There were also reservations and questions about other design decisions of the Vision Pro, such as the battery and the power cord.
While I do not expect Apple to disclose the technologies behind the product unless an institution enters a research partnership with it, I am confident that the involvement of the developer community will drastically change the landscape. More and more useful applications will emerge and help make the case for whether it is worth dropping USD3 499 (approximately N$64 000) on such a product. Whether the price will eventually drop (as with competitors like the Meta Quest Pro), or lower-end variants will be offered, as we have seen with the iPhone and the Apple Watch, is still up in the air.
The challenge for a research group with a limited budget undertaking research in the field is similar to that of acquiring a high-end server for advanced computation: it comes down to setting one's priorities.
Unfortunately, XR devices are less shareable than a server offering computation or storage. Perhaps the alternative is for such groups to team up with manufacturers to design and build lower-end bare devices, on top of which they deploy the outcomes of their research.
We have seen this approach in the past with lower-end laptops. What type of research can one conduct with such devices? There is a vast body of research in XR, spanning entertainment, marketing and more. In education, XR has been used to enhance the teaching of mathematics, biology and chemistry. Here, I would like to make a case for software engineering. The overall purpose of a software engineer is to learn how to turn project ideas into working software.
While building a piece of software, an engineer needs to optimise both the algorithms and how the data (information) is represented. Technically, we refer to this as data structures and algorithms. Such a course has usually been taught either in an abstract manner or through a programming language. In my view, both approaches are flawed.
The first is too abstract for learners to grasp the effects in memory. The second distracts learners with the syntactic noise of a programming language. What is essential here is to emphasise the interplay between the steps of an algorithm and the transformation of the memory locations allocated for that algorithm or the overall program.
Of course, visualisation tools for algorithms exist. Nevertheless, in their current form those tools are too basic and do not capture the required complexity, especially for advanced data structures (e.g. balanced trees, or probabilistic data structures such as the quotient filter). I believe XR can play an important role in addressing this gap. It can create a virtual environment with animations that capture that interplay, with the possibility of the user guiding the animation through voice, gesture and eye movement.
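To make the idea of that interplay concrete, here is a minimal sketch (my own illustration, not a tool described in this piece) of how an algorithm can be instrumented to record a snapshot of memory after every write. An XR front end could then replay these snapshots as animated memory cells that the learner steps through.

```python
# A minimal, hypothetical sketch: insertion sort instrumented so that the
# evolving state of the list (the "memory" being transformed) is recorded
# after every write. A visualiser, XR or otherwise, could animate the trace.

def traced_insertion_sort(values):
    """Sort a sequence, returning the result and a list of state snapshots."""
    data = list(values)          # working copy of the memory region
    snapshots = [tuple(data)]    # record the initial state
    for i in range(1, len(data)):
        key = data[i]            # element to place into the sorted prefix
        j = i - 1
        while j >= 0 and data[j] > key:
            data[j + 1] = data[j]          # shift a larger element right
            snapshots.append(tuple(data))  # snapshot after each write
            j -= 1
        data[j + 1] = key                  # drop the key into its slot
        snapshots.append(tuple(data))
    return data, snapshots


result, trace = traced_insertion_sort([3, 1, 2])
# result → [1, 2, 3]; trace replays every intermediate memory state:
# (3, 1, 2) → (3, 3, 2) → (1, 3, 2) → (1, 3, 3) → (1, 2, 3)
```

The same instrumentation pattern extends to pointer-based structures such as trees, where each snapshot would capture node links rather than array slots.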
I look forward to more technology in our classrooms, including XR technology.
*Professor Jose Quenum is an academic in the Software Engineering Department at the Namibia University of Science and Technology (Nust).
The opinions expressed in this piece are his own, and not the views of his employer.