We're always pushing the limits: from our roots in open source to mastering the cloud. Now we're doing it again, and this time it's all about Generative AI!
To make sure we got the most out of this new adventure, we organized a super practical internal hackathon focused on generative models and their applications. We took real-life client cases and tackled genuine business challenges to give our team the most hands-on experience possible.
AI is our domain and NLP is one of our specialties: we were using GPT models long before it was cool, primarily for analyzing customer reviews.
But the new wave of generative AI models changed the game, and we wanted to fast-track our learning by doing. We spent two Saturdays in a row working as a team on exciting new ideas, discussing actual client projects and real business implications. And the most exciting part? Actually building it.
During our internal hackathon, we focused mainly on integrating the family of GPT-based models into our current or backlogged client projects. We are aware of the concerns surrounding data privacy and security, and of the fact that OpenAI cannot be assumed to be a safe third party; on the contrary, it uses submitted data to train its models, which is why many corporations have banned it outright. Azure, on the other hand, guarantees that the data remains the customer's alone and will not be misused in any way. In light of this, we decided to build our solutions on Microsoft's Azure platform. Setting up the infrastructure in Azure was a matter of minutes, not days as it used to be with on-prem solutions.
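To give a taste of how little plumbing that involves, here is a minimal sketch of calling a GPT model through Azure OpenAI with the official Python client. The endpoint, key variables, API version, and deployment name are placeholders for illustration, not the exact configuration we used.

```python
# Minimal sketch: calling a GPT deployment through Azure OpenAI (openai>=1.0).
# Endpoint, API key, and deployment name are illustrative placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # the name of your Azure deployment, not the raw model name
    messages=[
        {"role": "system", "content": "You are a helpful analytics assistant."},
        {"role": "user", "content": "Summarize last quarter's customer reviews."},
    ],
)
print(response.choices[0].message.content)
```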
We discussed several projects and chose two to develop. With a joint effort of the whole team and great help from the cloud, we were able to come up with two working prototypes.
The first one addresses a need shared by many of our customers: giving domain experts direct access to data.
The “Ask your data” app can search your analytical data sources and answer questions that would traditionally require an analyst, such as “What is my churn rate?” or “Are there any opportunities for cost optimization in the IT department?”. The second application harnesses natural language processing to provide seamless access to an internal knowledge base composed predominantly of unstructured text. Together, this boils down to two major components:
- Automation of Reporting: This eliminates the need for complex model composition and the manual creation of dimensions, metrics, and visualizations; instead, users obtain the required information through a simple question. The approach hinges on well-described data, i.e. on solid Data Governance: the AI system is grounded in those descriptions and turns them into useful insights on demand. So even if the raw data looks like an impenetrable jumble, the model can sift through the chaos and deliver meaningful responses (a minimal sketch of this flow follows the list).
- Navigating the Regulatory Landscape: Large corporations often grapple with a multitude of guidelines and regulatory documentation. The traditional ways of managing and making sense of this material are not only time-consuming but also prone to human error. Our solution takes a different approach: by integrating the system with the company's internal knowledge base, we let users ask natural language questions and receive precise answers drawn from that vast and complex documentation (see the second sketch below).
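For the reporting component, the core idea is a question-to-query loop: the model drafts a query against a governed schema description, the query runs against the data source, and the model phrases the result for a business user. The sketch below illustrates that pattern under simplifying assumptions; the schema, table names, SQLite stand-in, and the `ask_your_data` helper are made up for the example, and it reuses the Azure OpenAI client shown earlier. It is not our production code.

```python
# Sketch of the "ask your data" idea: turn a business question into SQL,
# run it against the warehouse, and let the model phrase the answer.
# Schema, table names, and the SQLite file are illustrative assumptions.
import sqlite3  # stand-in for the actual analytical database

SCHEMA_DESCRIPTION = """
Table customers(id, signup_date, churned, segment)
Table costs(department, month, amount_eur)
"""

def ask_your_data(client, question: str, deployment: str = "gpt-35-turbo") -> str:
    # 1) Ask the model to write a query, grounded in the governed schema description.
    sql = client.chat.completions.create(
        model=deployment,
        messages=[
            {"role": "system",
             "content": "You write a single SQLite SELECT statement answering the user's "
                        f"question. Use only this schema:\n{SCHEMA_DESCRIPTION}\n"
                        "Return SQL only, no explanation."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content.strip()

    # 2) Run the query against the data source.
    #    Real code should validate and sandbox the generated SQL before executing it.
    with sqlite3.connect("analytics.db") as conn:
        rows = conn.execute(sql).fetchall()

    # 3) Let the model turn raw rows into a business-friendly answer.
    return client.chat.completions.create(
        model=deployment,
        messages=[
            {"role": "system",
             "content": "Answer the question for a business user, based only on the query result."},
            {"role": "user", "content": f"Question: {question}\nSQL: {sql}\nResult rows: {rows}"},
        ],
    ).choices[0].message.content
```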
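For the knowledge base component, a retrieve-then-answer pattern fits naturally: embed the document chunks, pick the ones closest to the question, and let the model answer strictly from those excerpts. Again, this is a hedged sketch rather than the exact hackathon implementation; the embedding deployment name, the in-memory index, and the helper functions are illustrative assumptions.

```python
# Sketch of Q&A over an internal knowledge base: embed the document chunks,
# retrieve the closest ones, and answer only from them.
# The deployment names and in-memory "index" are illustrative assumptions.
import numpy as np

def embed(client, texts, deployment="text-embedding-ada-002"):
    result = client.embeddings.create(model=deployment, input=texts)
    return np.array([item.embedding for item in result.data])

def answer_from_docs(client, question, chunks, chat_deployment="gpt-35-turbo", top_k=3):
    doc_vectors = embed(client, chunks)          # one vector per document chunk
    q_vector = embed(client, [question])[0]
    scores = doc_vectors @ q_vector              # dot product ~ cosine similarity for near-unit vectors
    context = "\n---\n".join(chunks[i] for i in np.argsort(scores)[-top_k:])

    return client.chat.completions.create(
        model=chat_deployment,
        messages=[
            {"role": "system",
             "content": "Answer using only the provided regulatory excerpts. "
                        "If the answer is not in them, say so.\n\n" + context},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content
```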
While Generative AI is only a small portion of AI research, its potential for our clients is huge and growing as the technology matures. The first, and certainly not the last, BigHack confirmed what we already knew: Generative AI provides a way to democratize access to advanced analytics. Generative AI is here to stay, but you still need good data sources to feed it, and business consulting expertise to benefit from it.
Stay tuned for more updates on our journey into Generative AI!