To the best of my understanding it means that a system made by CGI for digital signing of documents (as in: you get something like a PDF from a government agency and need to digitally sign it and send it back) has had its source code and/or some data belonging to it leaked.
Skatteverket, the Swedish tax authority, has been quoted in media as confirming that they use CGI's system for digital document signing but that none of their data nor that of any citizens has been leaked.
"One of the government agencies that uses CGI’s services is the Swedish Tax Agency, which was notified of the incident by the company. However, according to the Swedish Tax Agency, its users have nothing to worry about.
“Neither our data nor our users’ data has been leaked. It is a service we use for e-signatures that has been affected, but there is no data from us or our users there,” says Peder Sjölander, IT Director at the Swedish Tax Agency."
So if no data was leaked from the tax agency or from the users, then the leaked "digital signing documents" must have belonged to the only remaining party, which is CGI, so perhaps they were just some marketing documents about the benefits of their digital signing service?
The original phrasing from the attacker, from the website that put the data up for download/sale, was “documents (for electronic signing)”, which implies that they’re documents that would be signed in said system. I would take all of this with a large helping of salt though. CGI claims it’s not real production data anyway; maybe it is and maybe it’s not.
The best case scenario is in line with what CGI claims: these are lorem ipsum fake docs from an old git repo for a test instance of the system.
If that is the case, then it would have been wrong from the beginning for any government to keep hold of the private keys for the signature on my citizen card.
Because in that case they can sign documents on my behalf without my permission. In a court case, it would be near impossible for me to prove that the government gave my private key to someone else and that it wasn't me signing an incriminating document.
I apparently didn't phrase that very well. If what is the case? I was trying to ask which case was the case, not trying to claim that something specific was the case.
I'm familiar with electronic signatures, and I know what documents are, but I have never heard the phrase "electronic signing documents" and don't know what that is supposed to mean. What kind of documents? Documents about signing, documents that were signed, documents in the sense that files containing keys could be considered documents, or what?
Signed documents can be as simple as an ID of the transaction, a statement in text, PII data that identify what you sign, or a store of larger PDF files for download and verification. We do not know. I base this on how signing works technically in Sweden.
In Portugal we were early adopters for digital signatures on citizen cards.
You use the card reader, insert your government-issued identification card, and can sign PDF documents, which have legal validity since the private key from the citizen card was used.
Now imagine someone signing random legal documents with your ID for things like debts, opening companies, or subscriptions to whatever.
We might've lucked out here: there is some signature data on ID cards today, and official _plans_ to build a government-backed signing service, but practically _nobody_ uses them, so revoking all those keys would be a minor issue.
Currently, most Swedes use a private, bank-consortium-controlled ID solution for most logins and signatures.
The PR doesn't disclose that "an LLM did it", so maybe the project allowed a violation of their policy by mistake. I guess they could revert the commit if they happen to see the submitter's HN comment.
But search engines are not a good interface when you already know what you want and need to specify it exactly.
See for example the new Windows Start menu compared to the old-school Run dialog: if I directly run "notepad", then I always get Notepad; but if I search for "notepad" then, after quite a bit of chugging and loading and layout shifting, I might get Notepad, or I might get something from Bing, or something entirely different at different times.
Although PowerShell borrows the syntax, it (as usual!) completely screws up the semantics. The examples in the docs [1] show first setting descriptor 2 to descriptor 1 and then setting descriptor 1 to a newly opened file, which of course is backwards and doesn't give the intended result in Unix; e.g. their example 1:
dir C:\, fakepath 2>&1 > .\dir.log
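For comparison, here's how that same ordering behaves in a POSIX shell (a minimal sketch; the /tmp log paths are just placeholders):

```shell
# POSIX shells process redirections left to right, duplicating the
# descriptor's *current* target at the moment each operator is seen.

# Backwards (the ordering the PowerShell docs use): 2>&1 runs while
# fd 1 still points at the terminal, so stderr stays on the terminal
# and only stdout lands in the file.
sh -c 'echo out; echo err >&2' 2>&1 > /tmp/wrong.log

# The intended result on Unix: redirect stdout first, then duplicate it.
sh -c 'echo out; echo err >&2' > /tmp/right.log 2>&1
```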
Also, according to the same docs, the operators "now preserve the byte-stream data when redirecting output from a native command" starting with PowerShell 7.4, i.e. they presumably corrupted data in all previous versions, including version 5.1 that is still bundled with Windows. And it apparently still does so, mysteriously, "when redirecting stderr output to stdout".
IIRC PowerShell would convert your command's stream to your console encoding. I forget if this is according to how `chcp.com` was set or how `[Console]::OutputEncoding` was set (which is still a pain I feel in my bones for knowing today).
It's also not a file descriptor. It's a PowerShell stream, of which there are five (I think?) that you can redirect to; they're similar to log levels.
Well... Right here on the very first website, Tim Berners-Lee talks about how to build interactive web applications (here called "gateways"), albeit server-side rather than client-side:
https://info.cern.ch/hypertext/WWW/FAQ/Server.html
Couldn't they simply switch to zip files? Those have an index and allow opening individual files within the archive without reading the whole thing.
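That random access is easy to see with Python's stdlib zipfile module (the member names here are made up for illustration):

```python
import io
import zipfile

# Build a small archive in memory with a couple of "entries".
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("entries/email.txt", "alice@example.com")
    zf.writestr("entries/password.txt", "hunter2")

# The central directory at the end of the file indexes every member,
# so one entry can be read without decompressing the whole archive.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    secret = zf.read("entries/password.txt").decode()

print(names)
print(secret)
```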
Also, I don't understand how using XML makes for a brittle schema and how SQL would solve it. If clients choke on unexpected XML elements, they could also do a "SELECT *" in SQL and choke on unexpected columns. And the problem with people adding different attributes seems like just the thing XML namespaces was designed for.
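As a rough sketch of what I mean (the namespace URIs and names here are invented for illustration), two parties can attach their own attributes without colliding with the core schema or each other:

```xml
<!-- hypothetical example: a vendor extension lives under its own
     namespace prefix, so a validating or namespace-aware client can
     simply ignore attributes from namespaces it doesn't know -->
<Entry xmlns="https://example.org/vault/core"
       xmlns:acme="https://acme.example/ext">
  <Title acme:color="red">Mail account</Title>
</Entry>
```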
It's a single XML file. Zip sounds like the worst of both worlds. You would need a new schema that had individual files at some level (probably at the "row level.") The article mentions SQLCipher which allows encrypting individual values separately with different keys. Using different keys for different parts of a kdbx sounds ridiculous, but I could totally imagine each row being encrypted with a compound key - a database-level key and a row-level key, or using PKI with a hardware token so that you don't need to decrypt the whole row to read a single field, and a passive observer with access to the machine's memory can't gain access to secrets the user didn't explicitly request.
ZIP files can have block-like relatives to the SQLite page. It could still be a single XML file with piecewise encryption, so that saving a change doesn't require an entire file rewrite, just the blocks that changed and the updated "file directory" (central directory) at the end of the ZIP file.
Though there would be opportunity to use more of the ZIP "folder structure" especially for binary attachments and icons, it wouldn't necessarily be "required", especially not for a first pass.
(That said there are security benefits to whole file encryption over piecewise encryption and it should probably be an option whether or not you want in-place saves with piecewise encryption or whole file replacement with whole file encryption.)
A ZIP file with solid encryption (i.e., the archive is encrypted as a single whole) has all of the same tradeoffs as a KDBX file as far as incremental updates are concerned.
A ZIP file with incremental encryption (i.e., each file is individually encrypted as a separate item) has its own problems. Notably: the file names are exposed (though this can be mitigated), the file metadata is not authenticated, and the central directory is not authenticated. So sure, you can read that index, but you can't trust it, so what good is it doing? Also, to support incremental updates, you'd either have to keep all the old versions of a file around, or else remove them and end up rewriting most/all of the archive anyway. It's frankly just not a very good format.
Apparently "AI is speeding up the onboarding process". But isn't that because the onboarding process is about learning, and by having an AI regurgitate the answers you can complete the process without learning anything? That might speed it up, but it completely defeats the purpose.
According to the article, onboarding speed is measured as “time to the 10th Pull Request (PR).”
As we have seen on public GitHub projects, LLMs have made it really easy to submit a large number of low-effort pull requests without having any understanding of a project.
Obviously, that kind of higher onboarding speed is not necessarily good for an organization.
I think there's definite scope for that being true; not because you can start doing stuff before you understand it (you can), but because you can ask questions of a codebase you're unfamiliar with to learn about it faster.
I'd guess the time till first being able to make useful changes has dropped to near zero, but the time to gain mastery of the codebase has gone toward infinity.
Is that mastery still useful as time goes on, though? It has always felt a bit unhealthy for code to have people with mastery over it; it's a sign of a bad bus factor. Every effort I've ever seen around code quality and documentation improvement has aimed to make that kind of code mastery and full understanding irrelevant.
This has been my experience as a dev, and it always confuses me when people say they prefer to work at a “higher level”. The minutiae are often just as important as some of the higher level decisions. Not everything, but not an insignificant portion either. This applies to basic things like correctness, performance, and security - craft, style, and taste are not involved.
> this new method is possible to work because FreeBSD switched from Heimdal Kerberos implementation to MIT Kerberos in FreeBSD 15.0-RELEASE … and I am really glad that FreeBSD finally did it.
What was the problem with Heimdal? The FreeBSD wiki says they used an old version, but why not upgrade to a newer version of Heimdal instead of switching to an entirely different implementation?
Because we (Heimdal) need to make a release, darn it. I'm going to cut an 8.0 beta within a week or two.
Basically, an 8.0 release is super pent up -- years. It's got lots of very necessary stuff, including support for the extended GSS-API "cred store" APIs, which are very handy. Lots of iprop enhancements, "virtual service principal namespaces", "synthetic client principals", lots of PKINIT enhancements, modern public key cryptography (but not PQ), etc.
The issue is that the maintainers (myself included) have been busy with other things. But the pressure to do a release has ramped up significantly recently.
Also things like support for GSS-API pre-authentication mechanisms (so, you can use an arbitrary security mechanism such as EAP to authenticate yourself to the KDC), the new SAnon mechanism, pulling in some changes from Apple's fork, replacing builtin crypto with OpenSSL, etc. Lack of release has been typical OSS lack of resources: no one is paid to work on Heimdal full time.
This [0] may provide a hint. Heimdal was developed outside of the US and not subject to export restrictions, unlike MIT. So perhaps in the beginning it wasn’t the package of choice to begin with.