Reading, Recognizing, & Lenses
Years before the genesis of Avanceé, there was Mobile Ministry Magazine (MMM), a website and magazine geared toward helping faith-based groups better understand and apply mobile and other connected technologies. The challenge then was that its perspective was much further out than some groups were willing to consider. Yet it was heard in time; the audience grew and sparked several other movements (many of which still heavily influence faith and connectivity today).
In its final years, MMM began talking about some of the next maneuvers of connectivity and the faith. Though not spoken about directly, machine learning figured heavily into that view. This too was a viewpoint much further ahead of where many in the space were looking. It was also beyond the scope of MMM to speak towards those areas (even if it did demonstrate what's possible from the mobile-appendage and connected services). Since MMM's closing, it has been interesting to see others experiment toward a life beyond mobile as well.
Bible Lens is a new application from the folks at YouVersion. It combines images taken on the mobile device with YouVersion's dataset of Bible references, cross-references, topical subjects, and more: what the app can discern from the image is matched against that dataset to create an image-text mashup. In doing so, YouVersion hopes to highlight to the user (and then to whomever the user shares with) the connection between the lives they live and the Biblical text they follow.
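YouVersion hasn't published the internals of Bible Lens, but the behavior described above suggests a familiar pipeline: label the image with a recognition model, then match those labels against a topical index of verses. A minimal sketch of that idea in Python, with a stubbed-out model and a made-up index (every name and data point here is hypothetical, not YouVersion's actual system):

```python
# Hypothetical sketch of an image-to-verse pipeline like Bible Lens'.
# label_image() is a stub standing in for a real image-recognition
# model; TOPICAL_INDEX is a tiny made-up sample of the kind of
# topical dataset YouVersion describes.

from collections import Counter

# Stand-in topical index: topic label -> candidate verse references.
TOPICAL_INDEX = {
    "mountain": ["Psalm 121:1", "Isaiah 55:12"],
    "water":    ["John 4:14", "Psalm 23:2"],
    "friends":  ["Proverbs 17:17", "John 15:13"],
}

def label_image(image_path: str) -> list[str]:
    """Stub for an image-recognition model returning topic labels.
    A real system would call a vision API or on-device model here."""
    return ["mountain", "water"]  # pretend the photo shows a lakeside peak

def suggest_verses(image_path: str, max_results: int = 3) -> list[str]:
    """Match discerned labels against the topical index, ranking verses
    by how many labels point at them."""
    scores = Counter()
    for label in label_image(image_path):
        for ref in TOPICAL_INDEX.get(label, []):
            scores[ref] += 1
    return [ref for ref, _ in scores.most_common(max_results)]

if __name__ == "__main__":
    # The "mashup" step would render the top verse over the photo;
    # here we simply print the suggestions.
    print(suggest_verses("lakeside.jpg"))
```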
YouVersion began as a Bible reading application. Their intention was to do what other Bible applications had failed to do: increase reading (and therefore knowledge and application) by non-pastoral persons. Bible Lens seems to take that vision and push it in another direction: from encouraging reading to now recognizing. This sounds like a shift of mission; however, it is an acknowledgement that contextualization is more difficult than simply assigning literacy. Comprehension happens within the context of how one lives, and it seems YouVersion is aiming to use machine learning to connect the images we take of our lives to the codecs we use to navigate them. This can be dangerous (there are several canons of the Christian Bible, for example; does YouVersion cross-reference all of them, just a set the user selects, or a set it selects?). It can also be advantageous for developing better machine learning models (image and non-image based) which enable other types of filtering, contextualization, and even mashups not yet imagined.
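On the canon question specifically, one way such a system could bound its cross-references is by filtering suggestions against a user-selected canon. Continuing the hypothetical sketch above (the book lists are abbreviated stand-ins, not complete canons):

```python
# Hypothetical sketch: bound cross-references to a chosen canon.
# Book sets here are illustrative fragments, not complete lists.

CANON_BOOKS = {
    "protestant": {"Psalm", "Isaiah", "Proverbs", "John"},
    "catholic":   {"Psalm", "Isaiah", "Proverbs", "John", "Tobit", "Wisdom"},
}

def filter_by_canon(refs: list[str], canon: str) -> list[str]:
    """Keep only references whose book falls within the selected canon."""
    books = CANON_BOOKS[canon]
    # "Psalm 121:1" -> book "Psalm"; a real parser would handle
    # numbered books like "1 John" more carefully.
    return [r for r in refs if r.rsplit(" ", 1)[0] in books]

print(filter_by_canon(["Psalm 121:1", "Tobit 4:7"], "protestant"))
# -> ['Psalm 121:1']
```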
YouVersion does recognize, however, that their view of reading isn't the end-point for literacy. This machine-assisted viewpoint is something many industries are coming to, and not all have been able to imagine it well enough to navigate, or be motivated enough to move once they have imagined it. What was outside of the box for YouVersion was taking reading from their container to the images people keep on their connected devices. From there, they added value by connecting people's lives to the codes which bind them together. How might other companies figure out something similar? It might be as simple as going from reading to recognizing what else lies outside of their boxes. At least, that's what Avanceé aims to help groups realize.