The straightforward answer is there isn’t a daily use case centered around “productivity.” There are near-invisible cases, such as memo-writing based on items which come to mind or recent conversations, but these don’t move the needle toward what many have talked about using AI/ML for. Well, not toward some of what is desired for transformers to accomplish.

Meme image: a “box of shame” with the text, “All these folks talking about they are making the next great app with AI, and ain’t none turned up with a Google Reader replacement… for shame”

There are some edge uses, such as when looking to do comparisons or code changes/enhancements, where previously one would go to several sites and piece things together. Now, sites such as Perplexity.ai are doing the work of putting those bits together. Using automations within applications is a frequent thing, even though most other applications used are either the browser (usually Mobile Safari; Safari Technology Preview in the few macOS moments) or Muse. Even the data-intensive uses in apps such as MindNode and Airtable cannot yet be parsed - either the existing content is unreadable to the models at hand, or there is not an implemented model able to deal with the multiple content types and relational connectivity usually crafted. It has been more efficient to use an app/service’s existing IA structures, then let aggregation services (or even a play with GPT4All) handle offloaded snippets.
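As a rough illustration of that last bit - letting a local model handle an offloaded snippet - here is a minimal sketch using the GPT4All Python bindings. The model file name and the snippet are placeholders, not a record of the actual setup.

```python
# Minimal sketch: hand an offloaded text snippet to a local GPT4All model.
# Assumes the gpt4all Python package is installed; the model file name is
# a placeholder from the public catalog, not the one actually in use.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # downloads on first use

snippet = "Raw notes exported from an app's existing IA structure, pasted as-is."

with model.chat_session():
    summary = model.generate(
        f"Summarize the following notes in two sentences:\n\n{snippet}",
        max_tokens=120,
    )

print(summary)
```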

Hardware-Enabled Bits

So what about Humane’s AiPin? There’s got to be some productive bit happening there to justify that play, right? Well, yes and no. The aspects which would be better utilized for productivity moments are not yet implemented - specifically, daisy-chained prompts which execute either spoken commentary or a memo/event/contact that can be acted upon later. Some of the more fanciful imaginations have the AiPin integrated with some of the content-holding services in use, able to pull on those bits for various transformations. And the end of such imaginations has secure, spatial connections to my Muse corpus, where a model (one I’m not yet familiar with) is able to pull on the text, images, links, ink, and connected content types to offer a conversation between myself and the AiPin which borders on the “second brain” stuff many have been saying is already happening.
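To make “daisy-chained” a little more concrete: the idea is roughly that each step’s output feeds the next, ending in something that can be acted upon later. A toy, self-contained sketch - every name here is hypothetical, since the AiPin exposes no such API today:

```python
# Toy sketch of daisy-chained prompts: transcription -> extraction -> a
# filed memo that can be acted on later. All names are hypothetical; the
# AiPin exposes no such API today.
from datetime import datetime

def transcribe(spoken: str) -> str:
    # Stand-in for speech-to-text; the "audio" here is already text.
    return spoken.strip()

def extract_action(transcript: str) -> dict:
    # Stand-in for a model pass that pulls out the actionable bits.
    return {"note": transcript, "captured": datetime.now().isoformat()}

def file_memo(action: dict) -> str:
    # Stand-in for saving a memo/event/contact for later follow-up.
    return f"[memo @ {action['captured']}] {action['note']}"

# Chain the steps together.
print(file_memo(extract_action(transcribe("Follow up with Dana on Tuesday"))))
```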

The comfort of the Brilliant Labs Frame (or perhaps its newness) has led to it becoming a “daily carry.” There are some subtle, contemplative moments where it finds some overlapping use with the AiPin - but the inability to connect to multiple devices leaves it tethered to a larger mobile not often carried. Some specific efforts to integrate with iPadOS and a few canvas apps could garner some positive results. A recent hackathon by Brilliant Labs points to some intriguing possibilities.

Am looking forward to the ML-enhanced integration coming to the Tap wireless keyboard. Seeing input methods challenged and improved through more than auto-correct models could be a huge enhancement for usability and non-linear inputs. However, when I think about one of the shapes of using Frame and the AiPin, am not sure that Tap is the best experience - yet. It is solid enough with the iPad Pro, and usable with the XReal Air & Beam combo - which is probably enough for now.

Hardware which listens and gives summaries is not used; the laws and security around that route are not clear enough to proceed. Other visual or touch-based hardware is either not available, out of budget, or not yet out of a research lab to consider.

The AI-hardware space is still quite new. Connecting to other devices, especially other wearables, seems to be a lagging shape for how these products anchor into the market. There are some positive signs, but this isn’t yet a “standards make platform opportunities” kind of move. Too many of the nominal cases are the focus of major product features (translate, tell me about the news, etc.), instead of usage which follows niche or even harder-to-solve threads. Such is the challenge of hardware development - and it doesn’t seem that machine learning is helping to close that gap just yet either.

Small Wins

There are uses besides the code- and content-creation types often put forward in ML/LLM demonstrations. Exploring how local content sources can be influenced by prebuilt data models (or an analysis of the content) has been one route. There are some positives and negatives with this approach, starting with tuning models correctly to read the data formats properly. Nothing new is being learned from the outputs just yet. It’s all about putting the plumbing together before generating or transforming anything.
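As one example of that plumbing, assuming the local sources are plain Markdown and CSV files, a normalizing pass might look like the sketch below; the paths and shapes are illustrative only.

```python
# Sketch of the "plumbing" step: normalize local content into one shape
# before any model sees it. Paths and formats are illustrative only.
import csv
import json
from pathlib import Path

def load_snippets(root: str) -> list[dict]:
    snippets = []
    for path in Path(root).rglob("*"):
        if path.suffix == ".md":
            snippets.append({"source": str(path), "text": path.read_text()})
        elif path.suffix == ".csv":
            with path.open() as f:
                for row in csv.DictReader(f):
                    snippets.append({"source": str(path), "text": json.dumps(row)})
    return snippets

# Every snippet is now a uniform {"source", "text"} record, ready to hand
# to whatever local or hosted model does the generating or transforming.
for s in load_snippets("./notes")[:3]:
    print(s["source"], "->", s["text"][:60])
```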

In the networking space, the AiPin has become something of a star for the “look at this business card and save it to my memos” demonstration. This action scans the business card in its view, then saves it to memos for later search and use. At the time of this writing, it is not able to save directly to the contacts app - and it would be nice to do so with contextual information such as the date/time of the connection, etc. What many of these demos show is the lack of approachable features and the lag between asking and output. Still, contextual usage seems to engender a bit more promise and patience in tech-knowledgeable communities.
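For what that wished-for step might produce, here is a sketch: scanned card text turned into a contact record (a vCard) with the contextual note attached. The field values are made up; the AiPin offers no such export today.

```python
# Sketch of the step the demo stops short of: turn scanned business-card
# text into a contact record with context attached. The vCard fields are
# standard; the scanned values and meeting context are made up.
from datetime import datetime

def card_to_vcard(name: str, phone: str, email: str, context: str) -> str:
    met = datetime.now().strftime("%Y-%m-%d %H:%M")
    return "\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",
        f"TEL:{phone}",
        f"EMAIL:{email}",
        f"NOTE:Met {met} - {context}",  # the contextual bit the demo lacks
        "END:VCARD",
    ])

print(card_to_vcard("Dana Q. Example", "+1-555-0100",
                    "dana@example.com", "scanned at a networking event"))
```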

Another win has come in the space of simply explaining Avanceé’s position towards research and experimentation. Despite the negative and challenging media/noise about AI and ML, this pursuit of understanding seems to have made a positive impression within the space. We see something approaching the utilitarian here which isn’t as often spoken about or understood. Being able to crack the communication barrier, as well as the application one, seems a solid way forward alongside putting applicable products and services to use.

These are only small wins because the larger impacts just aren’t possible yet. Part of this will be waiting on some hardware or applications to be enhanced; other parts can be extended based on what is developed or pulled from development streams. Closed- and open-source approaches should keep the learning cup full, and the wins consistent enough.

Considerations Despite

This might make it sound as if “AI for productivity” is a failure, but it is not. There’s often enough play with hardware and software which contorts itself to this AI/ML lens to see the probability of something utilitarian coming from it. Is it worth the cost (ethical, environmental, etc.)? Depends on the day of the week. Is it worth the risk to avoid it? No. As with HTML/CSS in the late ’90s, or MeeGo/Maemo in the late ’00s, there’s actually a benefit to doing this - even if the short-term wishes for expansion don’t quite happen as desired. Those cited and non-cited early plays are instructive here also. It gives space to see how ML is more utility than panacea. It demonstrates how artificial intelligence illuminates other fractures in information architecture, policy making/analysis, and regulatory processes. Those become the actual opportunity spaces - while models, schemas, implementations, regulations, and market participants sort themselves out.

Will the opportunity be enough to keep the imagination going? That depends on your posturing. If you are learning something, making sense out of what is happening, then yeah, you’ll have energy for this for a while. If you are looking to make this into some kind of artful or legacy-building effort - please be involved in infrastructure projects more than wrappers, transaction management, and even community building. If you are building hardware, don’t just throw a young idea out there - notice how the polish of consumer/prosumer market products has elevated UX expectations. Pay attention not only to the AI/ML feature, but also to on-boarding, support, and even multi-language (including non-technical language) integration. You may not need an SDK at launch, but you do want to think about what the product might look like as a platform for other applications. Approach some hardware as if it is a toy - and garner perspectives and research in that wise. Approach other bits as if they will address usability and accessibility needs of fractured or maturing populations.

For those who are researchers, experimental agents, and (probably) evangelists - your role is to be informed, honest, and consistent. Own your beliefs and your values as you watch companies and products rise and fall (Robert Scoble has made two lists of this on Twitter/X, here and here). I’d say to not get too attached, but that’s nigh impossible. Instead, give grace to yourself when/if things don’t work out as you expected. And lastly, use the stuff - to a fault. Your value to both companies and later customers will be in your ability to describe how products and services align with the truth of imagination and use. Not simply with marketing. Pay attention to those products which had good marketing videos but did not follow through with what the marketing video promised. You want to be an advocate for things that are able to actually happen. Not happening in the future, but happening right now. And please pay attention to the voices and data highlighting the flaws in models, approaches, techniques, and practices (for example, this paper, this article, and folks like Abebe Birhane).

Avanceé will continue to investigate and experiment with some of the AI and ML tools where it makes ethical, fiscal, and technological sense. We will look at the companies that are investing rightly in the people and processes that should be elevated, while paying attention to the infrastructure challenges. The regulatory and ethical challenges will not escape scrutiny either. Because all of this is not healthy. All of this is not useful. But it is a perspective of use that is worth exploring. And when we are done exploring this edge, there will hopefully be a way forward that enhances humanity’s abilities, instead of diminishing them.