On Collaborative Products
Collaborative software is validated through collaborative experiences before it is validated by its individual features.
One of the challenges in training people on software and processes is the disconnect between knowing the features of an application, service, or device and knowing the context in which it is best used by those being trained. In the past, productivity software was built from a single-user perspective and then opened up to a collaborative element (for example, a report in MS Word was written by a single author, and “track changes” was meant to make the editing process inclusive of more eyes). Now, the world of productivity software has both that kind of tool and collaborative-first software. The former can be learned features-first; with the latter, learning features first is the surest way to fail, especially when training or leading in new or infrequent spaces.
So, how does one get up to speed with collaborative products if the training industry still begins from a single-user, features-first perspective? Playtime. Intentional playtime.
Services such as Slack and Microsoft Teams are more or less useless without three to five people in the service concurrently. You need not only the largely text-based conversations, but also the people who will use the app versus the website, the people who grab add-ons to make aspects of the work easier, and a consistent-enough stream of activity. If that activity doesn’t happen, Slack or Teams becomes a wasteland. Or worse: your content management platform is just another network file share, your collaborative word processor is no better than MS Word 97, and everyone wonders less about their own competencies and more about the focus of the technology leaders within the organization.
If something does take, you find new shapes of productivity forming, some of which have no present metric under the organization’s existing performance standards. At the same time, individuals need to learn quickly how to manage both the old ways of doing things and the newer ways the collaborative product has invited into the workspace. Groups start lifting styles of notification, shortcuts to other features, or repackagings of binders, files, and processes into a mastery of something more than the outputs themselves; they strike toward a mastery of what it means to work.
A mastery of the features, then, isn’t even mastery of the collaborative service; it’s just mastery of one context within it. There are the secret commands, the bots, the use across other software platforms (for example, using Zapier to push information into and out of Slack based on triggers or commands). At that point, there’s enough mastery to begin teaching others how the collaborative product has value beyond their own workspace. Only here does the word “innovation” really apply; otherwise, risk and metrics are defined by the features of what’s already used, and by the fear of what the new thing portends.
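To make that integration point concrete, here is a minimal sketch of the “push information into Slack” half of such a loop, using Slack’s incoming webhooks directly rather than Zapier itself. The webhook URL and the report link are placeholders, not anything from a real workspace.

```python
# A minimal sketch of pushing information into Slack, assuming an incoming
# webhook has already been configured for the target channel. The URL below
# is a placeholder; a Zapier-style trigger would call something equivalent.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"  # placeholder

def post_to_slack(text: str) -> None:
    """Send a plain-text message to the channel behind the webhook."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # Slack replies with the body "ok" on success

# Example trigger: a nightly report lands, so the team hears about it in Slack.
post_to_slack("The nightly report is ready: https://example.com/report")
```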
So then, how can one evaluate the value of collaborative products if that value only appears with others? That’s where your value system has to update along with the shape of the environment. Interdependent metrics such as friction to sharing, invasive or dismissive notifications, quality of communication, and resulting outputs are some of the measures. Should this be deployed to all groups? Maybe; it depends on what value you think it will bring. The pilot needs to be small enough to catch technical issues, and wide enough to get a range of users identifying gaps in what is and isn’t understood.
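As a rough sketch of what those interdependent metrics could look like in practice, the snippet below computes two proxies from a message export: median time-to-first-reply and the share of threads that never draw a second voice (a stand-in for friction to sharing). The data shape, field names, and example values are assumptions for illustration, not the format of any particular tool’s export.

```python
# Illustrative metrics over a hypothetical export of messages
# shaped as (timestamp, author, thread_id).
from dataclasses import dataclass
from statistics import median

@dataclass
class Message:
    ts: float        # seconds since epoch
    author: str
    thread_id: str   # messages in the same thread share an id

def reply_latencies(messages: list[Message]) -> list[float]:
    """Seconds between a thread's first message and its first reply by someone else."""
    threads: dict[str, list[Message]] = {}
    for m in sorted(messages, key=lambda m: m.ts):
        threads.setdefault(m.thread_id, []).append(m)
    latencies = []
    for msgs in threads.values():
        first = msgs[0]
        reply = next((m for m in msgs[1:] if m.author != first.author), None)
        if reply:
            latencies.append(reply.ts - first.ts)
    return latencies

def unanswered_share(messages: list[Message]) -> float:
    """Share of threads that never drew a second participant (friction proxy)."""
    threads: dict[str, set[str]] = {}
    for m in messages:
        threads.setdefault(m.thread_id, set()).add(m.author)
    if not threads:
        return 0.0
    return sum(1 for authors in threads.values() if len(authors) < 2) / len(threads)

# Example with made-up messages: one answered thread, one that sits unanswered.
sample = [
    Message(0, "amara", "t1"), Message(120, "ben", "t1"),
    Message(300, "ben", "t2"),
]
lats = reply_latencies(sample)
print(f"median reply latency: {median(lats):.0f}s" if lats else "no replies yet")
print(f"unanswered share: {unanswered_share(sample):.0%}")
```

None of these numbers mean much in isolation; the point is that they describe the group, not any one user’s feature fluency.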
Expertise needs to be experienced, especially as it relates to the nature of collaborative software. Once it has been, there’s a canvas of possibilities for applying it to others.