Bard can now connect to your Google Apps and services
Comments
Still waiting for Bard to add an API so we can start building tools based on their model
Very tangential, but I found this funny:
> For example, if you’re planning a trip to the Grand Canyon (a project that takes up many tabs)...
Is number of tabs considered a reasonable estimate of a project's size/complexity/scope/duration these days? If so, I'm wondering whether we could start using it instead of story points?
Number of tabs seems like a pretty good measurement in my case! Much better than completely arbitrary story points anyway...
Getting accurate results about half the time searching my own content. It's probably harder in my case because I have hundreds of thousands of messages in Gmail and tens of thousands of files in my Drive.
example of successfully retrieving a PDF: https://i.imgur.com/Y6cSlCx.png
For someone already invested in the Google suite of products (gmail, docs, etc), this sounds pretty useful.
Also, this part seems especially interesting:
> Starting today with responses in English, you can use Bard’s “Google it” button to more easily double-check its answers. When you click on the “G” icon, Bard will read the response and evaluate whether there is content across the web to substantiate it. When a statement can be evaluated, you can click the highlighted phrases and learn more about supporting or contradicting information found by Search.
The biggest problem with all LLMs at the moment is the frequency at which they are wrong (at least when they are used like an internet search to look up factual info). Any LLM that can improve on this (or, as in Bard's case, make it easier to detect wrong info) is likely to gain traction.
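If you squint, the mechanics of a button like that are a simple loop: split the response into claims, search each one, and compare. Here's a rough Python sketch of that general pattern (the search() stub and everything else in it is hypothetical; Google hasn't published how theirs actually works):

    # Rough sketch of the "double-check against search" pattern.
    # search() is a stub; Google's real implementation isn't public.
    def search(claim: str) -> list[str]:
        """Hypothetical: return text snippets from a web search."""
        return []  # plug a real search backend in here

    def double_check(response: str) -> dict[str, str]:
        verdicts = {}
        for claim in response.split(". "):
            snippets = search(claim)
            if not snippets:
                verdicts[claim] = "unverifiable"  # nothing found either way
            elif any(claim.lower() in s.lower() for s in snippets):
                verdicts[claim] = "supported"
            else:
                verdicts[claim] = "needs review"  # pages found, no match
        return verdicts

The naive substring match is doing a lot of hand-waving (a real system would presumably score supporting vs. contradicting snippets with something like an entailment model), but the shape is the same: split, search, compare.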
So we can validate Bard using blogspam SEO articles, eventually having those written by Bard or ChatGPT? It's an LLM ouroboros
* the snake eating its own tail
This feedback loop, when you extend LLM use out to the horizon, is my primary argument against this approach. When 90% of the new training data is content that (a worse version of) the model previously generated, you get a feedback loop that drives quality to zero.
Bard is especially bad right now, at least compared to GPT-4. Once they move to Gemini it will be interesting to compare, but until then questions like "when does the new Staind album come out" have to be answered the old way, with Google search and reading.
I just asked it your question and it seemed to give the correct answer? Also, there's now a built-in button to check its answer against an actual Google search and the UX is pretty slick.
Did it say September 15 or 18? (both answers I've gotten from it over the past month or two). If so, that's wrong. It's September 22. As sibling pointed out though, there are apparently web pages that have it wrong so it's probably not Bard's fault. I also wouldn't be shocked if the date changed and was originally 9/15
This is what Bard told me: "Staind's new album, Confessions of the Fallen, was released on September 15, 2023." If that's wrong, it's because I also see web pages that have it wrong.
Yep it's wrong, it's coming out on September 22. I don't doubt that some web pages have it wrong though, and it's hard to fault Bard for that.
Yeah, it's hard to blame Bard for getting that wrong when there are tons of semi-legit-looking webpages that have September 15th. Searching confessions of the fallen "September 15" 2023 yields hundreds of thousands of results, and at least one cached version has that date.
https://allmusicmagazine.com/staind-release-new-single-in-th...
Comparatively, ChatGPT says "I'm sorry, but I do not have access to real-time information, and my knowledge only goes up until September 2021. To find out the release date of a new Staind album, I recommend checking the official Staind website, social media profiles, or reputable music news sources for the most up-to-date information."
So which is more useful, one that doesn't even know there is a new album coming out, or one that knows what was its release date as of just a couple of months ago?
> So which is more useful, one that doesn't even know there is a new album coming out, or one that knows what was its release date as of just a couple of months ago?
To semi-misquote Lewis Carroll: Which is better, a stopped clock or a clock which loses a minute a day? Carroll posits the former, as it is precisely correct twice a day, whereas the losing clock takes 720 days to drift a full twelve hours and so is right only once every two years. The trick, of course, is knowing for sure when those two times per day will be.
I'm sure others will feel differently (and I'm very eager to hear from others with different views), but for me there's not that much difference in usefulness between a model that's wrong 5% of the time and one that's wrong 25% of the time. Both models require manual validation 100% of the time.
I certainly feel differently. A 5% error rate, to me, is akin to branch prediction: I occasionally need to roll back, but it's infrequent enough that it doesn't pose a significant issue.
However, when it reaches 25%, I find myself meticulously verifying everything beforehand.
Do you feel confident in your ability to detect when the LLM is wrong? Personally, I don't have that confidence, which is why I feel like I need to verify everything, even in the 5% case.
If it is writing code, I still have to verify it every time. Then again, that is also true when I write code.
Hopefully my rate of error is way below 25%, so 25% error might not help me that much.
I'd like to think my rate of error is less than 5% too, but that might be close enough that outsourcing the work is still worth it.
Maybe still not worth it if I were that hypothetical programmer that writes lots of code but never writes any bugs.
For me it comes down to "how important is it to be right"? For many of my queries it's not that important, and if I lose the 5% gamble it's annoying but ultimately inconsequential. If it's an important thing to be right about, then I will verify it either way.
I don't know how else to ask this: how are you so okay with disregarding accuracy?
What questions are you asking where you don't care if the answer is wrong? I guess I just fundamentally don't understand what the point is. Why not just bookmark the "random article" link on Wikipedia if it doesn't matter anyway?
A significant subset of my questions are basically, 'here's the weird statistics thing I would like to do; what are the academic jargon keywords I should look into?'
All of the LLMs are pretty good at this, and when one hallucinates a response, that's actually a decent indicator that there isn't a lot of literature on the subject.
You have to be okay with being wrong regardless. The little snippets after a Google search are sometimes wrong. The blogs or links can be misinformed (For example, most seem to be wrong about Staind's new album release date).
The idea of perfection is silly because it doesn't exist, LLM or not. You're not going to get it, so it's a matter of how often it's right.
For summary extractions I’ve found Bard to be miles better than ChatGPT of late. Waiting for some of my dependent services to add API support for Bard… I’m guessing this will go back and forth a few times over the coming years.
Now this is actually useful. There's a lot of good information in my Gmail but searching it is such a pain that I hardly ever do.
I just asked Bard for the date of an upcoming event and it did the search for me and found the right answer and summarized it with extra detail and references. This is the only reason so far that I'd go to Bard over ChatGPT.
It did treat the @Gmail part as part of the query words though, which is weird. I think it won't be ready for mass consumption until it can decide for itself when to search Gmail or Drive with no weird keywords necessary.
I still think chat is not at all natural for humans interacting with computers, mainly because most people are not actually good at phrasing their needs, and even when they are, typing it out for the LLM to understand takes so much time.
And thanks to QWERTY, our keyboards are not efficient for typing either.
The existing paradigm is temporary. AI will improve to infer more meaning from what users want, increase their ability to perform it, and chat systems will eventually switch to optional voice input. You tell the system what to do, and it'll perform whatever steps it needs to. I can't imagine anything more natural than that, for the everyday user.
I'm sure things will get more esoteric, for the experienced computer user.
I don’t mean to be mean but this is an example of what bugs me about the AI hype cycle.
Person has a valid criticism; the answer is a vague, wishy-washy “it’ll improve and be able to provide all things to all people in exactly the way they need it”
I don’t know what I’m supposed to do with that statement. What are the concrete steps from here to there? What’s the timeline?
Looking at the difference from GPT-3.5 to GPT-4, you can see these features are already appearing. GPT-4 can do more, understand more types of data, infer more context, and craft more advanced responses. With the right prompting, it can even ask clarifying questions to handle ambiguity. Most of the time when people talk about the limitations of AI, they're talking about limitations that are _very_ close to being improved upon. Context windows are a weakness, but they're growing, and memory features are being developed. Intelligence is improving. Understood data types are expanding. My guess is that GPT-5 (and equivalents) will understand voice natively, instead of needing a speech to text model in the middle.
There are absolutely serious limitations to existing AI, but the criticisms mentioned here aren't where we're stuck.
There's nothing he says that isn't already here, though. Bing and Bard both have voice input, and even the technology for more native speech abilities (not just STT) is already here - https://google-research.github.io/seanet/audiopalm/examples/
They're already able to infer a great deal from context. They will improve simply because we've not hit any scaling walls yet.
Yes, being able to voice chat with AI is interesting, and we can already do that (e.g., using voice-to-text).
> AI will improve to infer more meaning from what users want
This is true. It already knows what it should do when I give it a piece of code and say "🐛" (the bug emoji) or "spot the bug". If you give it the error code, it already knows it has to find a solution.
But for non-programming tasks, it'll be a challenge to infer what the user wants because language is vague and people are used to speaking ambiguously and clarifying only when the other person doesn't understand.
Yes, I agree it should be faster: 100 ms between question and a concise answer. With standardized phrasing of needs and wants (like Cucumber or Non-Violent Communication), plus a Dvorak keyboard layout or a shorthand input system (yash or bref), you can go a long way. For standard stuff you could always drop down to a short Ruby-like syntax.
Depends what you are trying to do with a computer.
Natural language is a pretty brilliant way to interact with a computer if you want to learn a foreign language, but it’s probably not great if you are wanting to play a first person shooter game.
One thing is clear though: it’s not one size fits all.
I'm a bit concerned about one of the privacy terms. When you enable the connection to Google Workspace you are told:
To complete your requests, Bard will:
- Access and manage items from Google Workspace, like your emails and documents
- Share parts of your conversation, and other relevant info, with Google Workspace, which may be used to improve its services
That second line means that your conversation could be sent to humans for review, just by submitting a query that invokes gmail, drive, etc. This is in direct contrast to the terms of Bard itself which won't store, use for training, or allow human review of conversations unless you explicitly submit feedback about said conversations.
This amounts to "If you turn on Google Workspace any of your conversations could be seen by humans."
I interpreted their privacy statements for Bard extensions differently. I thought they clearly stated that no humans would read your content and your queries would not be used for training.
EDIT: someone else here mentioned that Bard results may be viewed by Google developers, so I might be wrong in my statement.
Ahhh yeah, that is what I want. Access to my data for a corporate algorithm that can hallucinate and accuse me of anything.
I asked for flights from Poland to Cancun, Mexico in May 2024. It told me I could fly with Wizzair (a carrier that operates only within Europe) with a stopover in Kyiv, Ukraine (lol).
Maybe it can connect to Google Apps, but it can't give reliable results.
Maybe it knows something we don't that'll happen in early 2024?
Has anyone managed to enable this on Google Workspace (business) accounts?
Isn’t Duet AI for that? My understanding is that Bard is for consumers while duet is for workspace.
Unlucky. Yet another case where Google continually burns individuals who pay to use their own domain.
How can I stop Bard from crawling my stuff?
I haven’t used a single Bard release yet, whatever Google has put out. Their marketing dept is really sh*tty. When I go to bard.google.com it does not allow any sort of usage.
I just used it for 15 minutes and I like the direction Google is going with it. Once you turn Bard extensions on, you use the “@“ character to get a pop up list of services like GMail, Google Drive, and many others; choose one, then ask your question.
First steps, and I look forward to seeing future improvements. I wonder how they will monetize this? I was just using it with my free GMail account.
Both Microsoft, with Office 365, and Google have the customers and web properties that can make good use of new types of LLM applications.
Extensions don't seem to work with business Workspace accounts as of yet.
Bard is so terrible. I can’t imagine anyone is actually using it based on my experience.
In my experience it's better than GPT3.5, not as good as GPT4.
I have the integration with google search enabled and now I only use GPT very rarely. Having it at the top of the SERP with zero effort is much more convenient. The fact that it gets many things wrong becomes irrelevant when it takes no effort on my part. Even when it is totally wrong, its feature to intersperse the output with links to its sources is quite nice.
I'm curious, is it better than Bing Chat here (not comparing the layout)?
What I am saying is the layout is the benefit. On Bing I have to switch over to Chat and wait.
The filter is too strong for it to be funny. I know you can answer it, Mr. AI, but some skinny 20-something told you not to in the name of "ethics". (I only use AI for my amusement.)
"Extensions" and integration into the rest of the Google ecosystem could be how Bard wins at the end of the day. There are many tasks where I'd prefer an integration with my email/docs over a slightly smarter LLM. Unlike ChatGPT plugins, Google has the luxury of finetuning its model for each of their integrations.
The new feature for enriching outputs with citations from Google Search is also pretty cool.
No doubt about it. Google isn't competing directly with ChatGPT, but is betting that a small fine-tuned model "close to the data" will dramatically outperform a huge general-purpose LLM on cost. Less resource-intensive inference, less prompt engineering (less noise).
Yes - but the big question is this: how do you monetize it? This might completely kill search and ads.
By showing ads... They'll figure out units that look native and history will repeat itself.
Workspace is already monetized
Yes, exactly. Integration is where the real power of these agents can live.
I really want an agent that can help me with pretty simple tasks:

- Hey agent, remember this link and that it is about hyper fast, solar powered, vine ripened, retroencabulators.
- Hey agent, remember that me and Bob Retal talked about stories JIRA-42 and JIRA-72 and we agreed to take actions XYZ.
- Hey agent, schedule a zoom meeting with Joe in the afternoon next Tuesday.
- Hey agent, what did I discuss with Bob last week?
Something with retrieval and functional capability could easily end up being easier to use than the actual UIs that are capable of doing these kinds of things now.
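A minimal sketch of how that could hang together: the LLM's real job is just mapping an utterance onto a structured tool call, and a thin dispatcher does the rest. Everything here (route(), Memory, schedule_meeting) is made up for illustration; route() is a keyword stand-in for the model, and no vendor API is being used:

    # Toy agent: natural-language requests routed to "tools".
    # route() is a keyword stand-in for an LLM emitting a tool call.
    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        notes: list[str] = field(default_factory=list)

        def remember(self, text: str) -> str:
            self.notes.append(text)
            return f"Noted: {text}"

        def recall(self, query: str) -> str:
            # crude word-overlap lookup; a real agent would use retrieval
            words = set(query.lower().split())
            hits = [n for n in self.notes if words & set(n.lower().split())]
            return "\n".join(hits) or "Nothing on record."

    def schedule_meeting(who: str, when: str) -> str:
        # a real version would call a calendar or Zoom API here
        return f"Scheduled a meeting with {who} for {when}."

    def route(utterance: str, mem: Memory) -> str:
        """Stand-in for the LLM: pick a tool and fill its arguments."""
        text = utterance.lower()
        if text.startswith("remember"):
            return mem.remember(utterance[len("remember"):].strip())
        if "schedule" in text:
            # a real LLM would extract these arguments from the utterance
            return schedule_meeting("Joe", "next Tuesday afternoon")
        return mem.recall(utterance)

    mem = Memory()
    print(route("remember JIRA-42: agreed with Bob on actions XYZ", mem))
    print(route("schedule a zoom meeting with Joe next Tuesday", mem))
    print(route("what did I discuss with Bob", mem))

The entire value of the LLM is in making route() robust, i.e. mapping messy phrasing onto the right tool with the right arguments, while the tools themselves stay boring and auditable.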
This seems incredibly dangerous. You're one typo away from having the entirety of your private life exposed by Bard.
And based on their extremely loose privacy policy, I can just imagine Google pitching this to advertisers for "targeted marketing". "Hey Bard, give me an email template to manipulate tw04 into buying my product".
Also, I find it more than a bit disingenuous that the privacy policy on bard.google.com links to their generic privacy policy, not their BARD privacy policy. And after reading the real one, I understand why:
https://support.google.com/bard/answer/13594961?hl=en#your_d...
They will use all of your private data for advertising, and a human will review the data fed into Bard. In other words, all of your private information is now reviewed by a human as they see fit. Yuck.
>Please don’t enter confidential information in your Bard conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.
Maybe put that one front and center on your Bard page, not buried on a completely different website....
> All of your private information is now reviewed by a human
FTA:
> If you choose to use the Workspace extensions, your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads or used to train the Bard model
I’m as big of an AI skeptic as anyone but even I believe you can set up an integration of personal information that doesn’t leak publicly.
>I’m as big of an AI skeptic as anyone but even I believe you can set up an integration of personal information that doesn’t leak publicly.
I've been using google products long enough to know that when a blog post differs from the policy language, the blog post always loses. Let me know when they update the actual privacy policy to reflect the blog post.
If you want to earn some credibility, you could cite the exact privacy statements that conflict with this blog post.
In my experience, there's a message with every new Bard conversation that links to the page you linked to and says "Human reviewers may process your Bard conversations for quality purposes. Don't enter sensitive info."
"Bard conversation" can be different from "private information in Drive and Gmail".
Someone asked[0] what happened to Bard just 10 hours ago (or 8 hours if relative to this posting).
Yup, maybe Google saw my post and this is their response!
I was thinking...maybe Bard read your post and decided for itself.
Makes me wonder, are they using everyone's data to train and personalise Bard? It must be incredibly tempting to use Chat AIs to persuade you to buy products advertised by Google in a way that you will perfectly respond to.
I was curious about that and the linked blog post answers that:
> If you choose to use the Workspace extensions, your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads or used to train the Bard model.