TAVA Discovery – how we started it

We want to create a productivity tool that helps knowledge and ideas grow and get validated.

September 20, 2023

This will be a longer story – TAVA Discovery’s story goes back a few years, in some ways almost 25 years, but the main points have been a little over four years in the making.

I have been creating business software systems since the mid-1990s – dealing with software system creation, optimization, and improvement, as well as IT innovation in general, for quite some time now. I have always been fascinated by the fact that the world is producing, recording, and storing vast amounts of information on a daily basis. With advances in technology, both hardware and software, and with personal and business IT systems becoming more and more affordable, the amount of created data, information, and content has grown exponentially over the last two decades.

That is all great – the more we know, the better, right? Well, not entirely. The statement sounds logical: the more information and data we have, the better the conclusions and solutions we can devise. The content-creating and publishing juggernaut called “the Internet” is pumping out millions of terabytes of data per day! Content, information, and data on everything – absolutely everything – are being constantly created. The same applies to the data and information repository silos of any business enterprise. We live in an information-rich age; if you are not following and accounting for everything concerning your industry, market, customers, and growth prospects, you cannot keep up in this harsh business environment.

Yet the same innovation that produced this overflow of useful information created its own impediments: the ability of individuals and businesses to obtain proper and relevant information turned out to be biased, expensive, time-consuming to find and process, or limited by format, usage, or time. The very goal of creating, storing, and using data for our future scientific and business needs is burdened by our inability to rely on the best and most relevant sources, because our access to that relevant information is tripped up by the very methods and business models used to collect, find, and process the information we constantly need.

When we go to Google, or other similar sources, and search for something specific, we very rarely get what we really need – the most important or newest information (unless it’s a new TV we are searching for, in which case we get links from the highest bidders for those ad slots). What we usually get is what Google and the like want us to see – what Google’s advertisers want us to think is relevant for us. A similar model governs data and information collection in any enterprise: business repositories are filled with data collected either by business researchers using Google-like tools, or by teams of business curators who use the same Google-like tools and are paid to produce what might be relevant for that business at that point in time.

How do we know that information and data are still relevant, especially now, with the introduction of AI? Multiple factors and parameters determine whether the information we use and need is relevant. In a series of articles, I will try to share my observations and experience in dealing with scientific information and business data systems – where we excel and where we fail as a technology community to make the most of all the “miracles” that modern IT innovation and advancement have given us. This is the first one, and it will probably reach only a small number of friends, colleagues, and followers, but these articles apply to the business lives and endeavors of over 70 million knowledge and learning workers in North America.