A team that handles security vulnerability reports should never say "oh - that's another internal team. Go ask them...".
In fact almost any staff member inside an organisation that receives a plausible vulnerability report should ensure it reaches the right people. It's not something you should shrug off.
The "that's another internal team" reply was presumably more about the bounty than the vulnerability itself. Still, my contrarian take: support - whether for external customers or internal stakeholders - is a game of hot potato: the first person who fails to forward it to someone else gets burned.
It would be great if everyone were happy to drop whatever they're doing and lead the resolution of a customer's complaint, regardless of who the actually empowered/responsible person/team is. Alas, we live in a world where most people subscribe to the Copenhagen Interpretation of Ethics. In this world, even forwarding a request to those responsible is dangerous. Anything more than that entangles you with the problem, meaning you'll be held responsible for it, no matter your actual connection to it.
We can call it the "principal-agent problem", or just "survival in a world where requesters are hunting for anyone willing to engage with their requests".
(Source: I used to be the one willing to handle any internal request even tangentially related to my work, until my line manager told me to ask requesters for a project ID or billing code before giving any help that required more than 1 minute, because otherwise I'd end up doing none of the work we're actually being paid for.)
> support - whether external customers or internal stakeholders - is a game of hot potato
I’d like to shift this a little:
Support whose primary metric is handle time is in a game of hot potato.
From a business perspective, managers and leaders always feel like there are too many fires. That inevitably leads either to pressure on the front lines to “go faster” and “stop doing unnecessary work” (aka “taking time away from the fires”), or to some level of management intentionally blocking higher-ups from seeing those fires so that they look like they're managing the department well (in which case not only is there the same pressure on the front lines, but there's additional pressure not to reach out to anyone except through that manager).
When the primary metric is handle time, the issues pile up, there are never enough people to handle them, and the business slowly sinks as no one with a budget sees the “ounce of prevention [that can prevent a pound of cure]”.
However: if the metric is the minimum number of departments an issue touches before it's resolved, it's a whole different thing. Suddenly playing hot potato is a problem and “problem ownership” is praised. There are other metrics too that produce different support cultures (and sometimes different games), but the reason hot potato is so popular is that those other metrics all require top-level execs to be comfortable with spending now to save down the road.
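To make the contrast concrete, here's a minimal sketch of the same ticket history scored under both metrics. The ticket names, departments, and event-log shape are all invented for illustration; the point is only that a ticket bounced between teams can look great on per-hop handle time while scoring terribly on departments touched.

```python
# (ticket_id, department, minutes_held) - hypothetical ticket event log
log = [
    ("T1", "frontline", 5),   # quick triage, then the hot potato starts
    ("T1", "billing",   5),
    ("T1", "frontline", 5),
    ("T1", "infra",     30),  # finally lands where it can actually be fixed
]

def avg_handle_time(events):
    """Average minutes per hop - the 'go faster' metric rewards short holds."""
    return sum(minutes for _, _, minutes in events) / len(events)

def departments_touched(events):
    """Distinct teams the ticket passed through - rewards problem ownership."""
    return len({dept for _, dept, _ in events})

print(avg_handle_time(log))       # 11.25 - each hop looks fast...
print(departments_touched(log))   # 3 - ...but the ticket crossed 3 teams
```

Under the first metric, forwarding the ticket after 5 minutes is the winning move; under the second, the only winning move is for the first team to own it to resolution.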
“How” is a question whose edges are unique to each company, but if you're coming into a situation like that, one generally solid approach is to accept that things will suck for a while, and might even be worse short-term, then prioritize chipping away at the underlying causes.
Something like reducing staff ticket time by 20%, then using that 20% for feedback, strategy, and structure. Some (maybe even most, if you're lucky) of the front-line staff will have enough experience and insight to be invaluable here (though it's likely they won't yet have the language to express it in ways that make sense to management). As the company goes through the process of communication, discovery, awareness, planning, and execution (preferably with a tight feedback loop), some of the underlying causes will be addressed, the front-line pressure will ease off, the cascade effect will bring ease-off in other departments as well, and that 20% can go down to 5% or even 0 (not recommended lol), which will further reduce the workload and give longer-lasting relief.
Then with staff time “out of the red” the company can start thinking about what they will do in a fire-free future.
lol first they came up with the phone menu. Then they came up with complicated phone menus meant to confuse you and get you to give up. Then they replaced those with voice recognition menus that are straight up infuriating. Then they replaced that with AI that tries to act like it's a person.
All because telling you to GTFO isn't an option. Typical customer support is thus a proof-of-work scheme: to get it, you need to waste some significant amount of time and energy up front.
Problem is, at some point it's easier to just bypass all that and find ways to bug higher-level employees whose time costs the company significantly more. If more people realized that, regular support would quickly be told to actually solve customers' problems.
Not them, but I worked at a University in their IT dept, and our chief metric was solely 'customer satisfaction' (and 'customer' meant 'faculty'). We almost never wanted to pass tickets, because that tended to make the 'customers' upset. They wanted white glove service, and that's what they got.
It also meant that some people had "forever tickets", that were a continuous series of tacked-on asks by the same faculty member. HigherEd IT can be crazy (and crazily-laid back).
I feel like my experience with big cos is that the "that's another team" might go like this:
Parental controls is essentially in maintenance mode and has 1 dev nominally responsible for it; maybe their workload is divided between that and a bunch of other stuff they deem their "real" work. The way the component works means that bugs typically get assigned elsewhere in the system, very far away from parental controls; you, the owner of Contacts, land a bug like "Your feature XXX has the following failure in parental controls mode." The team responsible is like... "Why do I care about this? Why should I take a code change for this? Isn't that your problem?" Whoever is responsible for parental controls might not care, but even if they do, they don't have political leverage over the owner of the Contacts app or whatever. Therefore, won't fix.
Yes, and the worst part is I don't think it's even a side effect of organizational structure because I've seen it in so many places. There is just a quirk of human psychology where "if you touch a problem it belongs to you now," and the result is a situation where everyone would be genuinely happy and eager to help but nobody (except the newbie) dares try because the consequences for trying are immediate and dire.
This seems related to what I think of as the “jurisdictional hack.” Nobody can solve every problem, so you define a realm that’s your responsibility and anything outside it is someone else’s problem.
Keeping your jurisdiction small means you can do more within that jurisdiction, by ignoring even important problems that are outside it.
But the alternative is ineffective doomscrolling because all the world’s problems are yours.
By definition, Google's board, and the shareholders who elected them, can 'solve every problem' within Google, since they have the authority to wind down the company.
It's just that practically nobody in the world can credibly demand that Google's board, or even just upper middle management, make a decision on small matters.
Sure, but that's because they can delegate to lower management tiers.
When a ticket reaches your average IT guy, they can't usually delegate it to a lower tier employee unless there are formal support tiers, like L1 and L2, and the ticket was sent to one of the upper layer techs (which usually doesn't happen straight away, because of how L1 and L2 support teams work).
If this is not the case, the only way out is forwarding the issue to another team.
There's a lot you can criticize Google about, but that would have extremely disruptive worldwide effects. Google is load-bearing infrastructure. Dismantling it would require solving a whole lot of new problems.
This is how we end up with a really urgent problem that I can fix in the non-production environments in an afternoon, only to have people emailing me for the next month asking why this extremely important problem isn't fixed in prod yet.
Sorry guys, that’s another team. They’re extremely reluctant to deploy even very important things. Separation of responsibilities means we have no other choice.
> The "that's another internal team" reply was presumably more about bounty than vulnerability itself.
Yeah, that's my read. Basically the first line of support said "parental controls and screen pinning don't count as security boundaries", and the author is upset not because of an abstract argument about impact but because they want to get paid.
Should they be security boundaries? Honestly, I'm mixed on this. First because the threat model is totally different when the attacker is your teenager (i.e. who exactly is the harmed victim? The parent?).
But mostly because the whole idea behind bug bounties is to encourage disclosure of vulnerabilities that would otherwise be sold and deployed against the public at large. That is, the bugs have "value", and we're all better off if the purchase price is borne by the software developer rather than the criminal. There's no market for parental controls bypasses in that sense.
Are you thinking about something specific? What's the scenario where the public harm to a usage control bypass becomes more valuable to an attacker than the bug bounty?
(Edit: <sigh> than the bug bounty that the linked author desires. Really?)
Remember that neither of these technologies allows the device to do anything it isn't able to do in its default configuration. They're essentially a form of DRM: disallowing otherwise useful activities because of the desires of the owner (and not the user). Would you demand that, say, Apple pay a bug bounty for a DRM bypass that let people rip Netflix videos? Probably not, right?
You're totally right in the general case, but in the specific case of security vulnerabilities it makes sense for there to be an exception (even if the action taken is just to hot potato on your side).
Google is the king of "not my department." "No, I don't have contact with any other department within Google." "No, I don't have the email address of anyone on any other team in Google." WTF, Google?
So having worked there, this was absolutely true, and the parent complaint about hot-potato is also absolutely true.
The problem as I see it is that Google came to be dominated by an egalitarian culture, which at first wasn't necessarily a problem. This was an explicit choice by Larry and Sergey: your manager should not be able to unilaterally fire you just because of a personal disagreement, nor stiff you out of financial rewards, none of that. So, your manager lacks any formal authority over your day-to-day work; they have to use politics and soft power. Instead, performance is reviewed by a committee of your manager's peers, who can "calibrate" that manager's opinion of you against others and against empirical data.
The result of being judged by a faceless committee is that implicitly, some things generate the empirical data that they look at, and other things don't. It's helpful to oversimplify this to a common currency of “perfcoin” Ⓟ even though that was never explicit at Google. Some activities generate Ⓟ, some don't. Google has built dozens of new chat apps because whenever you can have a good excuse for how this aligns with your business priorities, they generate lots of Ⓟ. The design documents are rich in Ⓟ, the tracking issues for each feature are rich in Ⓟ, getting the thing privacy-analyzed and internationalized can get you some Ⓟ, the inevitable work to merge it into another chat app is also worth Ⓟ. But please understand that the existence of Ⓟ is a result of semi-hierarchy. The manager exists (hierarchy) but has to point to an objective measure (Ⓟ) to say that you're not doing what you're supposed to (semi-), it is almost a mathematical deduction that this has to exist given that structure.
Now, networking with people outside of your team will never get you any Ⓟ. And this is not for lack of trying! When I was there, it was a job responsibility to do some things that were not your job responsibility (“community contributions”) to try to associate Ⓟ with some form of networking! And everyone hated it, and it didn't work anyway. Manager-committees immediately decided that Ⓟ would not be awarded for excessive networking, just that you had to prove a little bit of networking or else Ⓟ would be deducted. Furthermore, the most reliable community contributions were noncommunal; conducting hiring interviews was the easiest: probably this person will not be hired, but even if they are, you will never interact with them again. But you conducted N interviews in the quarter, and that is just barely enough to not get docked some Ⓟ for being a shut-in.
I am giving somewhat of a negative portrait and it is not all negative, see Laszlo Bock’s Work Rules for the better parts. I'm just saying that the culture of not-my-department has been created by, and is sustained by, incentivization.
That's the ultimate cop-out. There are ways you can expose coordination with internal teams and colleagues without revealing anything about them:
"I've reached out to a colleague who has provided me with some additional context" or "this work requires some additional input from another team - I'm working to establish this and will get back to you with more details".
Neither of the above examples provides any more context on internal teammates or their organizations. However, they do require additional work and a culture of customer support (which Larry Page was infamously against for years).
That's an explanation, not a cop-out. It's saying it's a problem with management's incentives, and presumably not easily corrected until they have a good CEO.
It’s not really a vulnerability in the sense that it leads to any sort of system compromise. It’s definitely a design flaw in whatever features they added to the OS, but not necessarily something that warrants a huge investigation.
Like it's a frustrating response to this valid bug report, but it's not really a security risk here, either. You don't actually bypass the lock screen or anything.
> Also other features are effected like kiosk mode etc
Is it? That's neither demonstrated nor claimed in the linked article.
> I think it really is and could have serious safeguarding issues.
Elaborate. What's the security risk from your child using a browser after the parental control timeout expired? It's annoying that the automatic limits didn't fully happen, but data isn't compromised as a result, either.
That is still out of scope. And as a parent you have to accept that you cannot keep control of everything. Your child might see stuff in the streets, might see stuff on someone else's device for which you weren't prepared, or find ways to circumvent any limitation you put on their life.
Oh sure, kind of like we adults cannot keep control of everything, like secret browser loopholes. Hey, one dev parent's scope is another dev parent's creep!
If I read that correctly, in the second case someone can bypass the pinning feature to access your personal information via the default browser's active sessions. That would be a compromise if that's the case.
They said 'go ask them' about why they decided to close the issue (which also implies that someone went over this already), not 'go ask them because we simply don't care to look', as your comment seems to imply...
'We analyzed the issue and decided it won't be fixed'
is NOT the same as 'we don't care about this, go talk to some other team and maybe they'll fix it'.
Deciding something is not a bug is not the same as just ignoring the bug and not fixing it
In this case it is - because someone outside the org - who has no responsibility for your company fixing its stuff - is being asked to make sure the issue isn't lost.
Google lost out in this case - because an employee pushed responsibility onto an outside party.
It's a vulnerability. Report it to infosec. Even if they likely don't fix vulns themselves, they are the ones tracking SLAs for remediation and coordinating tickets, etc.
If your infosec team does not have a contact email (or preferably, several, e.g. "incident@", "security@", etc), slack/teams/webex channel, or escalation process for reporting security issues, that is decently well-known to employees, they are not doing their jobs well.
I cannot imagine working on an infosec team that doesn't make itself very visible, and very *available*, to other groups, and always position themselves as the 'catch-all' place for any security issues (which these bugs clearly are).
Send it to the CTO, the receptionist, or even HR. Hell, CC them all with a note saying that it's unknown where to send it, in the hope that someone will know where to forward it. It also sounds like your company needs better internal guidance on how to route communications within the company.
Google is so huge that it's extremely common to know you have an important bug for another team, but not to be able to route it to them because you can't find their team name.
Most teams have "code names" that have nothing to do with the public name of the project. For example, the parental controls team might be named "pigglewiggle-team" and the Android contacts team might be named "katniss-team" and their bug components might have similarly obscure code names. If you don't work with those teams frequently it can be really daunting to find.
Even when the bug components have hints that get you close to the right place, it's not unusual to learn that most of the engineers are busy working on the new version of the app that isn't released yet, and the old version of the app (the one with the bug) has been destaffed and bugs are supposed to be routed to some other random team that's literally never touched the code.
This giving codenames to projects thing seemed so endearing when I was working in a team and company of 10, but now that I’m working in a team of 100, and an org of 2000, it’s so extremely aggravating.
Any large bureaucracy seems to do that. Either you have a relatively flat org chart, but then teams have limited visibility past their "neighbors". Or you have a deeply nested hierarchy, where there's a clear path to route requests to any given place, but it takes ages because of all the red tape involved in escalating it sufficiently to route it properly (and then people try to avoid that hassle).
Try contacting the CTO in a company of 100k+ people. Your email probably goes to their junk folder. I worked in a similarly sized company and I didn't even know the name of the CTO, nor if there even was one.