Artificial intelligence (AI) is changing. But let’s not forget where we come from. The early concepts of pseudo-perceptual intelligence that permeated out of mainframe laboratories in the 1950s may have been too early for the processing and storage capabilities of the time. They may have been superseded by the “movie AI” of the 1980s, but it’s only since the millennium that we’ve started to see real progress, with the likes of IBM Watson making a decent showing and, more than that, starting to attract attention.
Of course, AI is now changing again, and it’s not hard to identify why. The rise of generative AI (gen-AI), which draws on large language models (LLMs) running on vector databases, has rarely been absent from technology news all year.
Sharper, more sophisticated AI tools
But as we move into a new year and perhaps some of the noise and hype dies down, what comes next for AI is all about refinement and tooling. The work now under way is about creating sharper language models tailored to a particular industry, task, or function, and about creating sharper tools that let software application development professionals incorporate these new kinds of AI into their applications.
Google famously ended a year of gen-AI turmoil with the launch of its Gemini large language model.
Before we consider how Google is positioning Gemini to reflect current trends, let’s pause for a nanosecond and remember what we just said. When the IT industry talks about high-level AI engines and models, it’s the big names in the tech world doing the talking. We’re not just talking about a new AI-powered app that orders fresh milk when the RFID-tagged carton in your fridge hits its best-before flag, and we’re not talking about AI widgets on our smartphones. Instead, the excitement is about new lower, board-level data science approaches percolating upward to deliver better AI. As we said, AI is changing.
Fanfare aside, what we see here is that Google is acutely conscious of the need to sharpen and improve its AI at this stage. Technologists want AI tools that can ingest all types of data and work in a variety of post-deployment scenarios. Google knows this and has built Gemini to be “multimodal”, meaning it can take in information not only as text, but also as images, audio, and video.
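By way of illustration, the snippet below is a minimal sketch of what that multimodality looks like from a developer’s chair. It assumes the google-generativeai Python SDK and an API key are already set up; the image file and the prompt are purely hypothetical.

```python
# Minimal sketch of a multimodal Gemini call: one request mixing text and an image.
# Assumes: `pip install google-generativeai pillow` and a valid API key (placeholder below).
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder

model = genai.GenerativeModel("gemini-pro-vision")  # vision-capable Gemini model
photo = PIL.Image.open("fridge_shelf.jpg")          # hypothetical local image

# The prompt is a list: plain text plus the image object, sent in a single request.
response = model.generate_content(
    ["List the items visible in this photo and note anything close to its best-before date.", photo]
)
print(response.text)
```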
Gemini triplets
Although we usually think of Gemini as twins in astrological terms, this Gemini is shaped and scaled as a triple pack. Google says that by creating different versions of Gemini, it can “run efficiently” on everything from data center-level cloud deployments to mobile devices. To enable enterprise software application developers to build and scale with its AI, Gemini 1.0 comes optimized in three different sizes:
- Gemini Ultra: The largest and most powerful model for the most complex tasks.
- Gemini Pro: The perfect model for a wide range of tasks. It may be rude to call it multipurpose, but you get the point.
- Gemini Nano: As the diminutive name suggests, the most efficient model for on-device tasks.
With the interests of real-world software developers at the forefront, the company has confirmed that Gemini Pro is now available via the Gemini API in Google AI Studio, its development environment designed to let programmers integrate Gemini models through application programming interfaces (APIs) and develop prompts as they write code to build generative AI applications. It is also available to businesses through Google Cloud’s Vertex AI platform.
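For the AI Studio route, the code stays deliberately small. The following is a minimal sketch, assuming the google-generativeai Python SDK and an API key generated in Google AI Studio; the prompt is illustrative only.

```python
# Minimal sketch of a text-only Gemini Pro call via the Gemini API.
# Assumes: `pip install google-generativeai` and an AI Studio API key (placeholder below).
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_KEY")  # hypothetical placeholder

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Draft release notes for version 2.1 of our inventory service."  # illustrative prompt
)
print(response.text)
```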
Why make Gemini available via both routes? The API option through AI Studio is a free, web-based developer tool designed to encourage usage and interest. Google says that once programmers are ready for its fully managed AI platform, they can migrate their AI Studio code to Vertex AI for additional customization and Google Cloud features. That tier is paid for, but there’s no such thing as a free AI lunch.
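To give a feel for what that migration involves, here is a hedged sketch of the equivalent call on Vertex AI, assuming the google-cloud-aiplatform SDK, an existing Google Cloud project, and Application Default Credentials; the project ID and region are placeholders, and the import path may differ slightly between SDK versions.

```python
# Minimal sketch of the same Gemini Pro call after moving to Vertex AI.
# Assumes: `pip install google-cloud-aiplatform`, a GCP project, and
# `gcloud auth application-default login` already run.
import vertexai
from vertexai.preview.generative_models import GenerativeModel  # path may vary by SDK version

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project/region

model = GenerativeModel("gemini-pro")
response = model.generate_content("Summarize yesterday's build failures for the morning standup.")
print(response.text)
```

The point is that the generation call itself barely changes; what moves is where authentication, billing, and enterprise controls live.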
Shaping AI for health
If the current trends shaping and sharpening AI (and generally allowing us to take scale as a given) are anything to go by in Google’s work with these tools, the healthcare industry is a case in point: the company’s health-focused AI is now available to Google Cloud customers in the US through Vertex AI, and the technology is expected to become more widely available next year.
The company is keen to be developer-friendly as it seeks to encourage programmers to engage with its AI technology by providing further tools and assistance. According to Google’s own AI blog, “Duet AI for Developers is now generally available. This always-on Google Cloud collaborator provides AI-powered code and chat assistance to help users build applications within their favorite code editors and software development lifecycle tools. It also streamlines running applications on Google Cloud, and Duet AI for Developers provides businesses with built-in support for privacy, security, and compliance requirements. Over the coming weeks, we will be incorporating Gemini into our Duet AI portfolio.”
What will happen next globally?
Reflecting (some say driving, some say following) trends across the AI industry, Google has worked to sharpen and shape AI, from how information is captured to how it is applied. However, there are (obviously) challenges ahead. Not all of these technologies are available in every region yet: Google is rolling them out in the US first, followed by Europe (and then the rest of the world). So there are broader questions about the future in terms of international expansion and, perhaps, governance.
While we have made reference to the healthcare industry, there are also efforts to bring Google Duet AI to the security operations (SecOps) space and make generative AI generally available to defenders in a unified SecOps platform. This is great for security teams, but there are many other technology engineers in a) operations teams and b) the broader IT department who could also be part of the generative AI movement and work concurrently (a pun on software parallelism intended) alongside their colleagues.
Artificial intelligence is changing and will continue to change. Many believe this year of generative AI stands out among the rest, but let’s hope developers get the right tools and we’re not hallucinating.