Hacker News

I am actually shocked to learn that x.ai has human beings reading the emails.

X.AI's marketing materials state it's "An AI personal assistant who schedules meetings for you."

Their default tagline on every email it sends reads "x.ai – artificial intelligence that schedules meetings".

You have to dig into their press kit to get any mention of human "Supervised Learning".

My typical interaction with Amy has been a 3rd party suddenly CC'ing her (it) into an existing thread.

Many of those threads contain information that I would consider confidential.

In some cases the people on the other end work for public companies that I am POSITIVE would not allow for a non-approved human to have access to that information.

I understand the counter-argument that x.ai is warehousing this info regardless, but introducing a human under the guise of a blind AI is unsettling.

It's a disingenuous pitch and should have repercussions if they don't improve their disclosures of what the service is actually doing.



It's kind of like Uber. Before they had the AI worked out, they were just paying people to drive all the cars.


Isn't this typical of the "Fake it until you have product/market fit, then automate" ethos?


Pitch:

"x.ai – artificial intelligence that schedules meetings"

Reality:

"A group of low-paid humans (that may or may not have been background checked) will read your emails to help you schedule meetings. They will probably not use this information in any way other than intended."

Those feel like two different products that I would make fundamentally different decisions about.


> that may or may not have been background checked

That's funny; I'll pass the background check and still trade on the inside information. I'm short your house right now.


Maybe, in the VCs minds, a "group of low-paid humans" is equivalent to "ai."


Shouldn't be much different. In either case you have no idea what the external person/AI will do with your data.


Yes, and I think that works in many areas and in many products.

But when you have confidential information in the mix - especially stuff that might have SEC implications - it changes the game.

A few years ago, I did a contract with a company that had a system that deleted all email older than a year. While the official answer was that it was "to save space and improve network performance," I suspect the unstated reason was to prevent fishing expeditions.

If your email is being CC'd outside the org and read by actual humans, that introduces some awkward problems... and may force people to admit the actual reason for the policy. ;)
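A one-year retention sweep like the one described above can be sketched in a few lines. This is only an illustration, not that company's system: `mailbox` is a hypothetical in-memory list of `(received_at, message)` pairs, whereas a real system would act on the mail store itself.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)

def sweep(mailbox, now=None):
    """Return only the messages received within the retention window.

    mailbox is a list of (received_at, message) pairs; anything older
    than RETENTION relative to `now` is dropped.
    """
    now = now or datetime.utcnow()
    cutoff = now - RETENTION
    return [(ts, msg) for ts, msg in mailbox if ts >= cutoff]
```

The official "save space" rationale and the litigation rationale happen to produce the same filter; only the cutoff and the logging around it would differ.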


Interesting, and probably very true.

Most email deletion policies are to protect companies from after-the-fact lawsuits. :-)


Meekan is a Slack bot that's actually 100% AI, takes natural language commands, and is used by tens of thousands of real people. The median time to schedule a meeting with Meekan is 53 seconds, which means he's delivering real value. He's not trying to be an AGI: the bot looks at everyone's calendars and preferences and suggests the best times to meet. By that, he's trimming the huge decision tree into a small, ranked suggestion list that's easy to grasp and decide upon. An assistive tool that saves tons of time and frustration. Nothing less.

[note:I'm the product manager at Meekan, https://meekan.com]
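The "trim the decision tree into a ranked suggestion list" idea can be sketched roughly like this. This is my own toy reconstruction, not Meekan's actual code or API: `free_slots`, `suggest`, and the tuple-based calendar format are all invented for illustration, and the ranking here is just "earliest first" where a real ranker would weigh preferences.

```python
from datetime import datetime, timedelta

def free_slots(busy_blocks, day_start, day_end, length=timedelta(minutes=30)):
    """Yield candidate start times that don't overlap any busy block."""
    t = day_start
    while t + length <= day_end:
        if all(not (b_start < t + length and t < b_end)
               for b_start, b_end in busy_blocks):
            yield t
        t += length

def suggest(calendars, day_start, day_end, top=3):
    """Merge all attendees' busy blocks and return a small ranked list."""
    merged = [blk for cal in calendars for blk in cal]
    slots = list(free_slots(merged, day_start, day_end))
    # Rank earliest first; a real ranker would also score preferences.
    return sorted(slots)[:top]
```

The point of the design is the last line: instead of presenting the full space of possible times, the bot hands back a short ranked list that a human can decide on in seconds.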


Just curious, but why do you refer to it as "he", especially in this context where you are touting it as an actual non-human? Not to single you out; I see people referring to all these other things as (usually female) "her", and it just unsettles me to think that people are anthropomorphizing these things so much.

On a related tangent, often these same people will refer to animals as "it", too, which is even more curious. A living being with an actual gender doesn't rate a gendered pronoun, yet a piece of software does.


True. We started out trying to limit the conversation to just scheduling-related topics. But very early on we realized that since this is all happening in a conversation, people expect him to be able to carry some smalltalk - because when he doesn't, he's perceived as stupid - he can't even say "hello" properly, how can he possibly do scheduling? (If this is interesting, I wrote about it a few months ago in much more detail: https://medium.com/building-the-robot-assistant/cheating-on-... )


I wonder how many other "cloud" companies have stuff that users assume are one-on-one conversations or sharing, and aren't. Even something as simple as a dating site almost certainly involves employees reading private messages (a must because of scammers, spammers, and downright abusive users).


If a third party is CC'ing Amy I would of course send her the unencrypted confidential information and think it's just an AI - they don't look at my data. What the hell.


It's completely disingenuous and is riding on the AI hype train. I've never once run into a chat "bot" that wasn't human. It's ridiculous, completely misleading, and frankly waters down any real startups working towards that future.


> I've never once run into a chat "bot" that wasn't human.

Well, that's absurd. Siri? Eliza? There are multitudes of actual chat bots out there. They're usually good enough to fool a lot of the general population. But the tapestry unravels with just a little bit of effort if you know what to ask.


I was specifically thinking of X.ai and FB's M when I wrote that. I've met plenty of dumb bots.


Keep in mind that it's still in beta. Users who are onboarded do get clearer disclosures.


Consider two parties to an email conversation. If one person is using "Amy" and adds her to the conversation, the other person may not fully understand the implications, especially since their marketing tries hard to blur the lines.


To the outside party, Amy seems like a competent secretary. I'm not sure why they would assume "she" isn't.


Doesn't every email "she" sends say "x.ai – artificial intelligence that schedules meetings"?


Even if that is the case, the third party would then assume that the secretary is "with" the other person, not some other person that no one has ever talked to or met.


You should always assume that employees of a third party are reading your messages and always read the privacy policy.


There's a huge difference between an employee monitoring the product to make sure things don't go wrong, and an employee paid to fill in for the main product. It's like a real-life Mechanical Turk.


If you are worried about confidential information there isn't much of a difference.



