A part of the work posted in some of the weekly “notable reads” lives in the attention paid to augmented/artificial intelligence (AI) and machine learning (ML). Much of this is because it is a popular topic, and there is much worth investigating behind the hype and “transformative” claims at play. Some of it has become a regular dive into data structures, language models, and model implementations. We purchased a Mac Mini a few years ago to explore this space in more depth - a prototypical research and development experiment. We use the Humane AiPin and Brilliant Labs Frame as “productivity and communication” computers alongside an iPad Pro. We take the macros and automations already built or refined (Apple Shortcuts, Zapier, IFTTT, Microsoft Power Apps, etc.) to understand how these items are reaching a wider populace in positive, negative, ethical, and economic contexts.

Defining the Complications

Most of the currently applied AI/ML space is not all that complicated (though impressive and frightening for many). Some believe the systems, or the data models used to determine/create outputs, offer some level of intelligence not previously knowable. Our posture is that they are not necessarily surfacing anything unknown (it is calculus with a touch of social engineering), but are getting to the outputs of those transformations and calculations faster than had been done previously - or more efficiently than existing processes allow for various teams/orgs. This is data science augmented by the power of a machine to do the non-interpretive parts of productivity. And for many, a lot of this activity simply boils down to “do you know how to make an outline to recognize the gap(s)?” Coincidentally, this falls near our observations of sensemaking frameworks.

An outline? Yes. A visual/tactile, logical map starting with what you know. The act of creating one illuminates what you do not know (aka, the gaps) and focuses your energies toward the avenues to address them. A bonus feature of many outlines is their ability to frame one’s narrative for an audience, making it easier to gain consensus or influence a decision. One excellent mapping method which blends well into data-driven analysis is Wardley Maps.
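
As a toy illustration of that gap-finding behavior (the topic, headings, and items here are invented for the example), an outline can even be treated as a small data structure in which headings without supporting detail surface as the gaps:

    # Toy illustration: an outline as a nested structure in which headings
    # without supporting detail surface as the gaps. The topic and items
    # are invented for this sketch.
    outline = {
        "Ship the reporting feature": {
            "Data sources identified": ["billing DB", "event logs"],
            "Export formats agreed": ["CSV"],
            "Access controls defined": [],   # nothing known yet
            "Rollout plan": [],              # nothing known yet
        }
    }

    def find_gaps(outline: dict) -> list[str]:
        """Return headings that have no supporting detail (the gaps)."""
        gaps = []
        for topic, sections in outline.items():
            gaps += [f"{topic} -> {h}" for h, items in sections.items() if not items]
        return gaps

    print(find_gaps(outline))
    # ['Ship the reporting feature -> Access controls defined',
    #  'Ship the reporting feature -> Rollout plan']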

Journey to Re-engineer Complexity

The work that we have done alongside others has simply been to figure out how large bodies of data can be organized or constructed to ignite focus. That is not the way we hear machine learning is typically put to use. But we see that augmented intelligence would be most sustainable if it followed this pattern.

What rises to the top within these journeys? Questions such as these have been asked around human-computer interfaces (HCI) for as long as the field has existed:

  • who is the person doing the acting;
  • what decision is being acted upon;
  • what validations are going to be present; and,
  • what do we determine to throw away based on those validations or the context of the actor in that decision.

Answering these puts us into a space of seeing machine learning (and artificial intelligence) as tooling for ways of working that have not yet been dreamt of, or that were previously too costly to implement. We sit just as much in the seat of a craftsman as we do those of a strategist and validator.
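
Read together, those four questions sketch a small decision record. One hypothetical way to model them (every name and value here is an assumption for the sketch, not something from the original work):

    # Hypothetical sketch: the four HCI questions above captured as a small
    # record a team could attach to each machine-assisted decision.
    from dataclasses import dataclass, field

    @dataclass
    class AssistedDecision:
        actor: str              # who is the person doing the acting
        decision: str           # what decision is being acted upon
        validations: list[str]  # what validations are going to be present
        discarded: list[str] = field(default_factory=list)  # what we throw away

    review = AssistedDecision(
        actor="support analyst",
        decision="close ticket as a documentation gap",
        validations=["confidence score above threshold", "human spot-check"],
        discarded=["answers contradicted by the current release notes"],
    )
    print(review)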

Use Case: ML to Correct User Misconceptions

In one past experience, we designed a series of questions around a familiar data set, and a “built from scratch” language model, in order to determine where the user documentation was inadequate. Yes, we could have simply answered this with a search query and some carefully understood logs. However, we wanted to emphasize how inefficiently information was being transmitted to others. The goal was to design the documentation around the user, not around the business owner administrator, so that issues with supporting the application or business processes could fall into measurable containers. The language model was built to look at a dataset and test a few notable queries. Through the response(s), we could view the contextual knowledge the business owner administrator thought was present. What was really neat is that the developer we worked alongside was able to show a numerical confidence index alongside the prompt and the output. This gave additional context to the inability of that documentation to prevent support issues in the future.
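
As a rough illustration of the shape of that exercise (the original model was built from scratch and is not reproduced here; the passages, questions, and the use of TF-IDF with cosine similarity as the confidence index are all assumptions for this sketch):

    # Minimal sketch: retrieve the best-matching documentation passage for
    # a question and report a numerical confidence alongside it. TF-IDF +
    # cosine similarity stands in for the from-scratch model; the passages
    # and questions are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    passages = [
        "To reset your password, open Settings and choose Account > Reset.",
        "Exported reports are saved as CSV files in the Downloads folder.",
        "Administrators can add users from the Manage Team screen.",
    ]
    questions = [
        "How do I reset my password?",
        "Where do exported reports go?",
        "How do I change my billing plan?",  # not covered by the docs above
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(passages)

    for question in questions:
        scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
        best = scores.argmax()
        # The score plays the role of the confidence index: low values flag
        # questions the documentation cannot actually answer.
        print(f"{question}\n  -> {passages[best]}\n  confidence: {scores[best]:.2f}\n")

A low confidence on the third question is the interesting output: it points at a documentation gap rather than a retrieval failure.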

In this case, the mapping/outline was for the people doing investigations into support issues. The journey explored why support issues landed in solved, unsolved, and nebulous segments. We isolated the audience (in this case, the development team). We isolated the decision matrix that presented itself (performance with a support team). And then we leaned into an assumption that humans hold but a machine cannot (“the information is in the documentation”).
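
How those segments were drawn is not spelled out above; one plausible reading, with purely hypothetical thresholds, is to bucket each answered question by its confidence score:

    # Hypothetical sketch: bucket support questions into the solved /
    # unsolved / nebulous segments described above, using confidence
    # thresholds. The cutoff values are illustrative assumptions.
    SOLVED_MIN = 0.6    # documentation clearly answers the question
    NEBULOUS_MIN = 0.3  # partial or ambiguous coverage

    def segment(confidence: float) -> str:
        if confidence >= SOLVED_MIN:
            return "solved"
        if confidence >= NEBULOUS_MIN:
            return "nebulous"
        return "unsolved"

    # e.g., pairing each question with the confidence reported earlier:
    results = {"How do I reset my password?": 0.82,
               "How do I change my billing plan?": 0.12}
    for question, confidence in results.items():
        print(segment(confidence), "-", question)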

The creation of this tool (a co-worker created a data model/schema from scratch) and its resulting output put us in a position to make better decisions about feature development and previously unknown support gaps. We found the (human) assumptions about what the documentation stated were false. The comprehension levels of those who needed to rely on the documentation were out of sync with their ability to retrieve it when needed, as measured by the support processes. The machine learning agent we developed exposed those assumptions and assigned a numerical value to the confidence of each generated answer. This exposed the gap and realigned expectations, content, and (later, refactored) processes.

Concluding Thoughts

Nothing about that exercise, or others like it, is terribly complex. We started with an outline/map, and we built our way toward desired and undesired outcomes. Machine learning gets us there in ways we might not have readily imagined, but it does not create new intelligence. You still need subject matter expertise, the ability to validate the model and its outputs, and the ability to create great context/questions for your data set. Without designing your environment in this way, the type of intelligence you observe may be a hallucination. If you trust the wrong things, your hallucination might taste OK, but you will end up putting yourselves (and your customers) in a dangerous position.