screenshot of Avanceé Reads 2024 boards in Muse

There’s something of a running joke in tech-corporate settings about how the terminology of whatever new tool or method breaks through gets bastardized. It’s really bad, and it wouldn’t be as funny if it weren’t true. There are several reasons for this: part of it has to do with the way we want to signal to the people who determine our reputation, and part has to do with how mimicry happens before adoption. It’s a sociological condition, and artificial intelligence (AI) is not immune to it.

Some folks will go the route of asking what kinds of tools you are using, then showing the automations, macros, and/or scripting that define their particular shard of understanding. This is the portion of the curiosity phase where something has moved out of the personal playspace and a person feels comfortable enough with it that it becomes a marker they use with others (again, signaling).

Some folks go the route of adopting language and methods as they were taught. They might state or show some evidence of a certification (personal feeling: there’s no such thing as being certified in artificial intelligence - it is not (yet) a regulated space through which risk measures are calculated for insurance purposes). These certifications might signal an affinity for the space, but rarely more experience than academic exercises. Some exercises are good, but most only aim for generalized or best-case applications. For a certification to carry more value, it should come with some additional metric (for example, hours in a live setting, an apprenticeship, etc.).

Lastly, you’ve got folks who go the route of learning things themselves. The connected age has certainly made this easier and more accessible than at any other time in history. However, proof of work is harder in many spaces. Software languages are probably the best fit here, since standing up an environment and knowing how to debug are easy enough to assess. Conceptualization is harder to assess - how people think and then formulate that thinking into accepted methods. Project and program methods are harder still without avenues such as freelance, pro bono, or similar practice. And even that is difficult, since there’s a significant difference between working with a small client (1-3 persons) and working with a larger one (500+).

All of this makes conversations around artificial intelligence, machine learning, and their applications difficult. Sure, anyone can write a prompt. And folks who have learned how a model was built can develop complex prompts to get a particular output. But that isn’t “how AI works” any more than sitting in the back seat of a bus means you drove across the country.
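
To make the distinction concrete, here is a minimal sketch of what “writing a prompt” amounts to in code. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt text are purely illustrative. The point is that the prompt is an input handed to someone else’s hosted model, not a change to the model itself.

```python
# A minimal sketch: "writing a prompt" is passing text to a hosted model.
# Assumes the OpenAI Python SDK (pip install openai); the model name and
# prompt below are illustrative placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever hosted model happens to be available
    messages=[
        {"role": "system", "content": "You are a terse analyst."},
        {"role": "user", "content": "Summarize this quarter's churn drivers in three bullets."},
    ],
)

print(response.choices[0].message.content)
```

However elaborate the prompt becomes, the model underneath is untouched; the craft sits in the input, not in the system.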

A few people can take the outputs of a generative platform and debug them - but that too isn’t AI (or ML, or genAI). It is debugging plus bits of applied/manipulated information architecture. Using the genAI features of an application to come up with assets, artifacts, or dashboard variants? Nope, that also shows nothing more than an ability to use the feature - you aren’t actually changing the feature by turning the knob.

Being able to understand temperature and weights, and to use Python, R, etc. to manipulate the data or the model? Now we’re talking about actually dealing with aspects of intelligence modeling. Reshaping a model by correcting its labeling assets (without touching the underlying data to do so)? Yeah, that is exercising the lessons of machine learning and language modeling.
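
For contrast, here is a small sketch of what one of those knobs looks like at the model level rather than in a product’s settings screen: temperature applied directly to output logits before the softmax. The logit values are hypothetical stand-ins for what a real model would produce; only the mechanism is the point.

```python
# A small sketch of temperature as a model-level concept: scaling logits
# before the softmax. The logit values are hypothetical; a real model
# (and the weights behind it) would produce them.
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Convert raw logits to a probability distribution, sharpened or
    flattened depending on the temperature."""
    scaled = logits / temperature
    scaled -= scaled.max()          # shift for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.2])  # hypothetical next-token scores

print(softmax_with_temperature(logits, 0.5))  # low temperature: peaked, confident
print(softmax_with_temperature(logits, 1.5))  # high temperature: flatter, more varied
```

Correcting labeling assets or reweighting features works the same way in spirit: the intervention happens on the model’s inputs and parameters, not on a finished product’s slider.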

So, where does that leave how you interpret the waves of people who believe they are or aren’t doing “AI right”? I’m sure you can plug the question into any of these services and get an answer, but once it’s evaluated, who decides whether it is a display of intelligence or a product of someone’s web of influence?