Good afternoon, and happy Easter!
Before I get into the roundup stories, I’d like to say that I hope everyone has an excellent Easter bank holiday and enjoys the (hopefully) good weather.
Let’s get into this week’s digital marketing and SEO news.
Google’s CEO On The Future Of Search and Bard
This week, Sundar Pichai, the CEO of Google, talked about AI innovation and the impact of Bard on Search. He spoke about it on a podcast called “Hard Fork” from the New York Times.
The conversation with the hosts primarily discussed AI safety, the future of search, and advanced AI tech.
Pichai spoke about Google Bard‘s launch. Bard is Google’s rival to ChatGPT and is powered by Google’s AI language model, LaMDA.
He stated that even though the reception to the test build has been somewhat quiet, a more capable version would be released soon.
Pichai envisions assistive and generative AI tools such as Bard becoming part of people’s daily lives.
Bard and Gmail integration
At the moment, Google is testing Bard integration with Gmail with a limited number of users.
“You can go crazy thinking about all the possibilities, because these are very, very powerful technologies. I think, in fact, as we are speaking now, I think today some of those features in Gmail is actually rolling out now externally to trusted testers — a limited number of trusted testers.” – Sundar Pichai
The advanced AI race
Pichai stated that he was surprised by the positive user reception of ChatGPT so far, and commended OpenAI for making good progress with advancing the tool.
AI incorporation into search
Microsoft’s CEO recently made some comments about how they’d challenge Google with AI tech in search. Pichai responded to the comments by stating that the company has incorporated AI into Google for years.
“I would say we’ve been incorporating AI in search for a long, long time.
When we built transformers here, one of the first use cases of Transformer was BERT, and later, MUM. So we literally took transformer models to help improve language understanding and search deeply. And it’s been one of our biggest quality events for many, many years.
And so I think we’ve been incorporating AI in search for a long time. With LLMs, there is an opportunity to more natively bring them into search in a deeper way, which we will. But search is where people come because they trust it to get information right.” – Sundar Pichai
Pichai stressed that for all the innovation Google wants to do with AI, they will continue to be responsible.
Using Bard as an example, he explained that Bard has deliberately not been connected to the most capable LaMDA models.
A balance between innovation and responsibility needs to be achieved for any big-tech company.
Google Search’s future
Pichai discussed what the future of Google Search could look like. He offered the idea that the search bar where you type in search queries could be transformed into something resembling a command-line interface.
A user would type commands to perform various tasks, rather than just using the bar for searching. Google wants to assist users in a way that makes sense, but without becoming the solution for every interaction.
“I think I want to be careful where Google has always been about helping you the way that makes sense to you. We have never thought of ourselves as the be-all and end-all of how we want people to interact.
So while I think the possibility space is large, for me, it’s important to do it in a way in which users use a lot of things, and we want to help them do things in a way that makes sense to them.” – Sundar Pichai
Stick with the Intelligency weekly roundup to learn more about how Google transforms search for the future!
What is Dolly, the ChatGPT clone?
Staying on the topic of AI, Bard, and ChatGPT: last week saw the announcement of a new open-source AI chatbot called Dolly, a clone of ChatGPT.
Enterprise software company Databricks announced a new AI language model called Dolly Large Language Model, or DLL for short. The name Dolly is a reference to the first mammal ever cloned from an adult cell, a sheep called Dolly.
Open-source language models
DLL is one of the latest iterations of the growing open-source AI movement, which aims to offer users and developers greater access to AI technology in an anti-monopolisation effort. Open-source AI ensures that powerful AI technology isn’t controlled solely by large corporations.
An open-source basis
Dolly was actually built using training data from an open-source project called Alpaca. Alpaca was created by Stanford University and is itself based on Meta’s open-source language model, LLaMA.
LLaMA was trained on publicly available data and can reportedly outperform much larger models such as GPT-3 on many benchmarks.
An improved dataset
Databricks has shown that a smaller, older model fine-tuned on a high-quality dataset can still deliver a very capable language model.
“Dolly works by taking an existing open source 6 billion parameter model from EleutherAI and modifying it ever so slightly to elicit instruction following capabilities such as brainstorming and text generation not present in the original model, using data from Alpaca.
…We show that anyone can take a dated off-the-shelf open source large language model (LLM) and give it magical ChatGPT-like instruction following ability by training it in 30 minutes on one machine, using high-quality training data.
Surprisingly, instruction-following does not seem to require the latest or largest models: our model is only 6 billion parameters, compared to 175 billion for GPT-3.”
Once Dolly is open to public testing, we’ll be able to cover more stories about it!
The new features for live streams on YouTube
YouTube announced that it will be adding new features for live streams held on the platform, such as live reactions. As well as this, creators will be able to see the types of content their viewers watch on other channels.
Let’s learn some more about each.
YouTube plans to roll out live reactions to live streams, which will allow a viewer to react and see how other people have reacted during specific moments. The company stated that this helps foster a community for a creator on the platform.
On iOS devices, you’ll be able to choose from a set of reactions in real time during a stream. Creators and viewers will be able to see the reaction, but not the user behind it.
If your channel is eligible to live stream, this feature will be turned on by default. However, you can opt out of live reactions if you so choose.
Improved live stream management
Two new features will help creators manage their live streams better.
- Ads automation – This feature allows YouTube to insert an ad into the stream at the moment it deems most appropriate.
- Live control panel – Creators will be able to view stream stats as well as access ad-serving controls.
To access the panel, a creator will need to paste the panel URL into their third-party encoder such as OBS.
YouTube aims to assist creators by helping them come to informed decisions about the formats that they publish on the platform.
To do this, the platform is trialling a new card which shows a creator the top formats their audience watches on other channels.
It will break this down by showing the percentage of long-form videos versus Shorts and live streams that their audience watches.
While Intelligency doesn’t stream on YouTube, it sounds like a great time to start if you’re a creator looking to get into streaming.