Thought I'd write an #introduction for all those #newhere from the Musk exodus:
I'm an analytical chemist. I've hosted my own #Friendica node (instance) at my home since 2018, shortly after the Cambridge Analytica scandal. From there, I got bit by the #selfhosting bug and am now hosting my own #opensource replacements for many FAANG services.
Welcome all to the #fediverse! If you are thinking about self-hosting and have questions, I'd be happy to help as I'm able!
#introductions
'xz utils' Software Backdoor Uncovered in Years-Long Hacking Plot
A fascinating but ominous software story dropped on Friday: a widely used file compression software package called “xz utils” has a cleverly embedded system for backdooring shell login connections, and it’s unclear how far this dangerous package got … frodo (UNICORN RIOT)
Details of the @pixelfed security vulnerability from February 10th have now been published.
If you are still using a vulnerable version (39.5% of pixelfed instances as of today), then you should update immediately, otherwise someone may just be able to turn off federation for your instance.
github.com/pixelfed/pixelfed/s…
#pixelfed #security #fediverse
Insufficient authorization allowing elevated access to resources
### Summary: When processing requests, authorization was improperly and insufficiently checked, allowing attackers to access far more functionality than users intended, including to the administra... (GitHub)
JB Carroll reshared this.
JB Carroll likes this.
If we're in a simulation, then this would be a great troll by the admin. 🙂
Susan ✶✶✶✶ likes this.
Interned String Buffer Full
I switched my PHP-FPM over to 8.0 on Friendica, the same version as my Nextcloud instance. Ever since, I've been getting a notification about my opcache.interned_strings_buffer becoming full. I've increased the memory in php.ini, but it seems to fill up almost immediately no matter how much memory I give it (I have tons of RAM, so I can experiment).
As I understand it, interning keeps one shared copy of each distinct string so repeated uses all reference it, which improves performance. I wonder if Friendica generates some strings that are always unique and therefore always fill up the buffer.
Doesn't seem like it's affecting anything yet, but didn't know if that's a bug or a feature.
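For reference, these are the two php.ini knobs involved, with illustrative values only (a sketch, not a recommendation): the interned-strings buffer is carved out of opcache's total shared memory, so the two settings interact.

```ini
; php.ini sketch -- values are illustrative, not recommendations
opcache.memory_consumption=256      ; total opcache shared memory, in MB
opcache.interned_strings_buffer=32  ; MB of the above reserved for interned
                                    ; (deduplicated) strings; PHP's default is 8
```

Current usage can be inspected from PHP with `opcache_get_status()['interned_strings_usage']`, which reports buffer size, used/free memory, and the number of interned strings.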
JB Carroll likes this.
Hypolite Petovan likes this.
Apparently sharing from Twitter is getting to be broken. Reminds me I still need to leave it. 😅🙃
Tech companies are ruining their apps, websites, internet
Google, Amazon, Meta, and other big tech companies are making their core products worse and ruining everything from apps to the internet. Ed Zitron (Insider)
Adam Lui :verified: likes this.
alysonsee (Fca) likes this.
@alysonsee (Fca) the repair added some excitement yesterday. The generator they were using was too close to the house and set off our CO detector, which in turn made our security company send the fire department out. My wife, who works from home, did not get much accomplished yesterday...
Thank goodness we don't repair our basement every day! 😅
alysonsee (Fca) likes this.
Apple Sued for Allegedly Deceiving Users With Privacy Settings After Gizmodo Story
Researchers found that Apple collects iPhone data even when the company's own iPhone Analytics setting explicitly promises not to. Thomas Germain (Gizmodo)
@Steven Brady Thanks! Back up and raring to go. Just changed the Docker image back to 14.5 for postgres, did another docker-compose pull and up, and we're back.
I would like to switch to 15 someday, but seems less than straightforward, so will do that when I have some free time. 🙂
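In case it helps anyone planning the same jump: Postgres major versions change the on-disk data format, so swapping the image tag alone won't work — it takes a dump/restore (or pg_upgrade). A sketch of the compose side, with hypothetical service and volume names:

```yaml
# docker-compose.yml sketch -- service and volume names are hypothetical
services:
  db:
    image: postgres:14.5   # after dumping with pg_dumpall, switch this to
                           # postgres:15 and point it at a fresh volume,
                           # then restore the dump with psql
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

Roughly: dump with `docker-compose exec db pg_dumpall -U postgres > backup.sql`, bring the stack down, change the tag and volume, bring it back up, and feed the dump to `psql -U postgres` in the new container.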
tal
in reply to Five • • •
Looking forward, I'm still more worried about the fact that state-backed threat actors are targeting open source projects via this social-engineering route than about the technical issues.
I think that the technical issues that the attacker used can be addressed to at least some degree.
Maybe it makes sense to have a small number of projects that are considered "security-critical" and then require that they only rely on other projects that are also security-critical. That's not a magic fix, but it might tamp down on the damage a supply-chain attack could cause.
Still...my suspicion is that if an attacker could get code into something like xz, they could probably ultimately, even with only user-level privileges, figure out ways to escalate to control of a system. I mean, all it takes is for one user with admin privileges to run something under their account anywhere.
Maybe Linux and some other software projects just fundamentally don't have enough isolation. That is, maybe the typical software package should be expected to run in a sandbox, the way smartphone or video game console software does. That doesn't solve everything, but it at least reduces the attack surface.
But the social side of this is a pain. We don't want to break down the system of trust that lets open-source work well more than is necessary...but clearly, there are people being attacked by people who have a lot of time to spend on putting together tactics to attack them. I'm not sure that your typical open-source maintainer -- health issues or no -- can realistically constantly be on their guard against coordinated social engineering attacks.
The attacker came via a VPN (well, unless they messed up) and had no history. The (probable) sockpuppets also had no history. It might be a good idea to look for people entering open source projects who have no history and are only visible from a VPN...but my guess is that if we rely on reputation more, attackers will just seek to subvert that as well. In this case, they probably committed non-malicious commits for the purpose of building reputation for years. If you're willing to put three years into building reputation on a given project, I imagine that you can do something similar to have an account lying in wait for the next open source project to attack. And realistically, my guess is that if we trust non-VPN machines, a state-backed attacker could get ahold of one...it's maybe more convenient for them to bounce through a VPN. It's not something that they absolutely have to do.
But without some way to help flag potential attackers, it just seems really problematic from a social standpoint. I mean, it's a lot harder to run an open-source project if one is constantly having to think "okay, has this person just spent the past three years just building reputation so that they can go bad on me, along with a supporting host of bogus other accounts?" I'm not sure that it's possible, even for really paranoid people.
wargreymon2023, JackGreenEarth, joelfromaus, Matt/D, randomname01, Uninvited Guest, OpenPassageways, smeg and Mars like this.
jarfil
in reply to tal • • •
XKCD 1200
Qubes OS
LinuxCon + CloudOpen Europe 2014 - Qubes OS - Joanna Rutkowska
It's been over 10 years already, the desktop is only timidly adding containers, disposable VMs, per-program access permissions, and all that.
Qubes OS: A reasonably secure operating system
Sonori, JackGreenEarth and smeg like this.
DavidGarcia
in reply to jarfil • • •
Five likes this.
tal
in reply to jarfil • • •
Some of it is that a lot of desktop software paradigms weren't built to operate in that kind of environment, and you can't just break backwards compatibility without enormous costs.
Wayland's been banging on that, but there's a lot to change.
Like, in a traditional desktop environment, the clipboard is designed so that software packages can query its contents, rather than having the contents pushed to it. That lets software snoop on the clipboard.
What's on the screen and a lot of system state like keys that are down and where the mouse pointer is and so forth wasn't treated as information that needed to be kept private from an application.
Access to input hardware like controllers isn't linked to any concept of "focus" or "visibility" in a windowing system. That may or may not matter much for a traditional game controller (well, unless you're using some system where one inputs a password using a controller), but modern ones have things like microphones. Hell, access to microphones and cameras in general on laptops isn't normally restricted on a per-app basis for desktop software. From microphone access alone, you can extract keystrokes.
I don't think that there's a great way to run isolated game-level 3d graphics in a VM unless you're gonna have separate hardware.
Something that I've wondered about is potential vulnerability via Steam. None of the software there is isolated in a "this might be malicious" sense -- not from the rest of the system, not from other software sold via Steam. And Steam is used to distribute free software...I haven't looked into it, but I don't think the bar to get something onto Steam is super high. And then consider that there are free-to-play games that have to make money however they can, and some of that is going to be selling data; some of how they do that may be to just bundle whatever libraries the highest bidder offers with their game. How secure are those supply chains? And on Steam, most of the software is closed source, which makes inspecting what's going on harder. And that's before we even get to mods and stuff like that, which come from all over the place.
I mean, let's say that a random library from an ad company used by a free-to-play game is sending up the identity of the user on the computer. It has some functionality that slurps in a payload from the network telling it to grab credentials off the existing system, and does so for ten critical users. Would anyone notice? I have a really hard time believing that there'd be any way to pick up on that. Even if you wanted to, you can't isolate many of these games from the network without breaking their functionality, and there's no mechanism in place today isolating them from the user's storage or other identity information.
I own IL-2 Sturmovik: 1946. It's published and developed out of Russia, and the publisher, 1C, has apparently even been sanctioned as part of general sanctions against Russia. Russia is at war with Ukraine, and we in the US are supplying Ukraine. 1C runs a lot of software on user computers and can push updates to it. If the Russian authorities come knocking on 1C's door and want a change made to some library, keeping in mind 1C's position, are they going to say "no"? Keep in mind that what they say may determine whether the company survives an already-difficult environment, and that they may have no idea the full extent of what the state has going on. Now, okay, sure, probably -- hopefully -- there aren't US military people or defense contractors running IL-2 Sturmovik directly on critical systems. But...let's say that they run it at home. How carefully do they isolate their credentials and home information on that system? Does their home machine ever VPN in to work? Is there personal information -- such as access to personal email accounts -- that could be used for kompromat on such systems?
I've managed to get some Ren'Py software (no 3d requirements, normally limited access to input hardware required, one common codebase for most functionality, can generally use one's local Ren'Py engine to run games instead of using the binaries provided, all favorable characteristics for sandboxing) running in firejail (and in the process, discovered that one of the games I had was talking to a chat channel...this was described in the source as reporting numbers of users, and the game is a noncommercial effort, but chat channels have been used for commanding botnets before, and even if it's not malicious, if it can do that without attracting attention, I'd very much expect that malicious software could do so). That is about the extent of my attempts to really sandbox games, and even with that very limited and superficial effort, I already ran into something that I'd have some security concerns about. My guess is that there are a lot of holes out there, even if unintentional.
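For anyone wanting to try the same thing, a minimal sketch of that kind of firejail setup. The flags are real firejail options; the launcher path is a hypothetical stand-in, and `true` is substituted so the snippet runs even without a game installed.

```shell
# Minimal firejail sandbox sketch: no network, throwaway home dir, no root.
SANDBOX_FLAGS="--net=none --private --noroot"

if command -v firejail >/dev/null 2>&1; then
  # Replace "true" with the actual launcher, e.g. ./MyGame.sh
  firejail $SANDBOX_FLAGS -- true || echo "firejail failed to start (kernel/namespace restrictions?)"
else
  echo "firejail not installed; would run: firejail $SANDBOX_FLAGS ./MyGame.sh"
fi
```

`--net=none` cuts all network access, `--private` mounts a fresh temporary home so the game can't read your real files, and `--noroot` removes the root user from the sandbox's view. For a game that legitimately needs the network (like the chat-channel case above), `--net=none` obviously has to go, which is exactly the hole described here.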
As things stand, Valve and similar app store operators have no incentive to isolate what they distribute, so if they do so, it's out of some kind of general sense of responsibility to users. Users generally don't have the technical expertise to understand what the security implications of Valve's decisions are, so they can't take that into account in purchasing decisions. We could mandate something like strict liability to Valve and other app store vendors or maybe OS vendors in the event of compromise -- that'd probably make them introduce isolation for software that they distribute. But there'd be some real costs to that. It'd make games more expensive. It might make it harder for smaller "app stores" like gog.com, itch.io, or Lutris to operate. I use Debian. Debian doesn't cost anything, and if you put the Debian project in the position where it may be legally liable, they're gonna have to charge for their OS to cover those costs. With charging probably comes DRM. With DRM probably comes restrictions on what one can do with software, which smashes into problems with open-source software. It's definitely a problem.
Strict liability: responsibility for consequences from an activity despite the absence of fault or criminal intent.
Contributors to Wikimedia projects (Wikimedia Foundation, Inc.)
JackGreenEarth, Five and jarfil like this.
jmcs
in reply to tal • • •
rozwud, Sonori, bobburger, jlow (he/him), 🌸ミッコ🌸, JackGreenEarth and randomname01 like this.
tal
in reply to jmcs • • •Yeah, supply chain attacks can happen. There was that infamous SolarWinds supply chain attack recently. But I think that there are some important mitigating factors there.
That is, I think that this is going to be especially challenging for the open-source world, as the attacks are targeting some things that the open-source community is notable for -- border-agnosticism, a relatively low bar to join a project, and often not a lot of personal-identity validation.
Yeah, that's kinda what I was thinking, but you put it more frankly.
It seems like there's a lot of potential for this to be corrosive to the community.
Five, randomname01 and smeg like this.
leanleft
in reply to Five • • •i view this as potentially very well funded governments vs ordinary people. we never stood a chance.
wargreymon2023, 𝒍𝒆𝒎𝒂𝒏𝒏 and GlennicusM like this.
4dpuzzle
in reply to leanleft • • •That suggestion is because the attack took years of ground work, psyops, multiple disciplines and several levels of obfuscations. It needs the kind of effort that only a well paid and dedicated team can pull off. But that need not necessarily be a state actor. It could also be some spying/malware company (like NSO), any of the big corporates or a criminal group with lots of money.
But don't lose hope. All it took to uncover all of that was just one engineer who was annoyed by SSH slowing down from 0.3s to 0.8s. The effort needed to uncover it is only a fraction of what's needed to hide it. This is also a vindication of the FOSS philosophy. Imagine uncovering this if the source wasn't available.
StorageAware, Axolotling, Sonori, bobburger, jlow (he/him), JackGreenEarth, Matt/D, 𝒍𝒆𝒎𝒂𝒏𝒏, Arcka, GlennicusM, Adanisi, Kaijobu, Five, randomname01, smeg and UNIX84 like this.
esaru
in reply to Five • • •like this
smeg and Five like this.
esaru
in reply to Five • • •