Fedora 38 To Prohibit Byte Swapped Xorg and Xwayland Clients … – Slashdot

That “bi-endianness” is not an at-will change but rather gets set at boot, and stays that way.
This sort of feature is low-level enough that it really needs to be part of the wire-protocol testing (a facility so low-level that everything uses it; once checked and verified, there ought to be basically no possibility for abuse, since it becomes invisible to everything that builds on top), so if it's "largely untested" that says oodles about developer ineptitude. Which we know exists, because the Xorg ki
No, PowerPC can run little-endian guest applications on a big-endian host OS, and vice versa – this is supported on AIX. In theory, SPARCv9 can, too, although I don't recall ever seeing the capability used in practice.
The reason why there is big-endian is because little-endian is stupid. Little-endian is a holdover from the days of 8-bit computers, where the arithmetic was simpler with little-endian. PowerPC is big-endian; it has a bi-endian option, but it is not without its problems.
It's bizarre that this is a problem. Endianness should be resolved long before the guts of the server or client see the data. Most internet protocols deal with this issue well, so it's odd that Wayland is having trouble. Probably PC
The feature is done wrong then. Endianness should have been specified at the protocol level. Not doing this feels like they were optimizing at the wrong place.
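The point about specifying endianness at the protocol level can be sketched in a few lines of Python: if every field is pinned to one wire byte order, neither peer's CPU ever matters. The message format here is invented purely for illustration.

```python
import struct

# A hypothetical wire format: every field is packed in network
# (big-endian) byte order, so neither peer's native endianness
# ever shows up on the wire.
def encode_point(x, y):
    return struct.pack('!hh', x, y)   # '!' = network byte order

def decode_point(data):
    return struct.unpack('!hh', data)

wire = encode_point(100, -5)
print(wire.hex())          # 0064fffb
print(decode_point(wire))  # (100, -5)
```

Either peer can run on any hardware; the translation happens exactly once, at the pack/unpack boundary.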

It's not a pain for Wayland, nor is this about Wayland having trouble. This is Fedora seeing that the feature is basically untested and also not really used, so the easy option was to disable it.

By that standard they shouldn’t be shipping Wayland at all, then.

The reason why there is big-endian is because little-endian is stupid. Little-endian is a holdover from the days of 8-bit computers, where the arithmetic was simpler with little-endian. PowerPC is big-endian; it has a bi-endian option, but it is not without its problems.

Little-endian is not stupid, it is the natural way to map characters onto integers in computer memory. Also, little-endian was invented by the 16-bit DEC PDP-11, at a time when IBM and DEC mainframes were all big-endian. I had a hard time getting my head around it, but when I did I realized that big-endian was the stupid one.
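A quick way to see the "natural mapping" argument: under little-endian, an integer's first byte in memory is its least significant byte, so the same address reads correctly at any width. A small Python sketch, using struct to stand in for raw memory:

```python
import struct

value = 0x41  # ASCII 'A' held in a 32-bit integer

le = struct.pack('<I', value)  # little-endian memory image
be = struct.pack('>I', value)  # big-endian memory image

print(le.hex())   # 41000000 -- the character sits at the lowest address
print(be.hex())   # 00000041 -- the character sits at the highest address

# Under little-endian, reading just the first byte of the integer
# yields the character directly, whatever the read width.
print(le[0:1])    # b'A'
```

Under big-endian, the address of the character depends on how wide the integer is; under little-endian, it doesn't.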
PDP-11 used "middle-endian", a mix: little-endian within a 16-bit word but big-endian when combining words into 32-bit values. DEC did this to stay consistent with a Honeywell minicomputer, and these were inexpensive computers; the bigger mainframes just had a plain X-bit value that you read in one go over an X-bit bus. That is, on the mainframes you usually could not read at an address smaller than the word size. PDP-11 allowed byte addressing in RAM, but this didn't matter for most of the instructions; the real importance of endianness was when combining two 16-bit values in 32-bit operations.
The failure to do little-endian for 32-bit values was a mistake, probably caused by an engineer who didn’t understand little-endian. The mistake was corrected in the VAX, a 32-bit instruction set that was fully little-endian.
Communication between computers, then as now, was usually done on a serial link, one bit at a time. The Teletype Model 33, the original computer terminal, transmitted ASCII characters least significant bit first, and that became the standard for later terminals. If you lay the bits from such a terminal into memory as they arrive, the natural way to do it is to start with the low-order bit of the first byte or word of the buffer, proceed until you get to the end of the byte or word, then go on to the next higher memory address. What you get in memory isn't right for a PDP-10 or System/360, but it is right for the PDP-11.
The only computer interface I am familiar with from that time which used parallel transmission was the IBM 2701, but the parallel feature was not used when communicating between IBM and DEC computers; we used modems, which are inherently serial.
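That argument can be modeled in a few lines: emit the bits of a value least-significant-first, as the Teletype did, pack arriving bits into successive bytes, and the memory image that falls out is exactly the little-endian one. A toy Python sketch:

```python
def serial_to_memory(value, nbytes):
    """Model a terminal sending bits least-significant-first and a
    receiver packing each group of 8 arriving bits into successive
    bytes, first-arriving bit as each byte's low-order bit."""
    bits = [(value >> i) & 1 for i in range(nbytes * 8)]
    mem = bytearray()
    for b in range(nbytes):
        byte = 0
        for i in range(8):
            byte |= bits[b * 8 + i] << i
        mem.append(byte)
    return bytes(mem)

print(serial_to_memory(0x1234, 2).hex())     # 3412
print((0x1234).to_bytes(2, 'little').hex())  # 3412 -- identical
print((0x1234).to_bytes(2, 'big').hex())     # 1234 -- would need reordering
```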
Little-endian is not natural. It’s a compromise. Computers prefer to read and process numbers with the least significant bits first, but most human languages don’t do that. So, little-endian does this weird thing where numbers are broken up into slices that are part LSB, part MSB, but kinda-not-sorta-really.
Yeah, it may work, but having grown up with big-endian 68000 processors, I still can't get used to it. I thought about it for a while, but decided to use big-endian for my hobby CISC processor. It makes it easier to deal with extra instruction words without having to resort to scrambled immediates and other RISC-like nonsense.

I also grew up with big-endian computers: the IBM 7090, DEC PDP-1, PDP-6/10, IBM System/360. When the PDP-11 came out I had a hard time understanding it, but when I saw the light I became a firm convert to little-endian.
We are accustomed to writing and speaking numbers with the high-order digits before the lower-order digits. Based on that, when we draw a computer word on paper, if it holds an integer we place the high-order bits to the left of the low-order bits. It therefore seems natural that, when th

The reason why there is big-endian is because little-endian is stupid. Little-endian is a holdover from the days of 8-bit computers, where the arithmetic was simpler with little-endian. PowerPC is big-endian; it has a bi-endian option, but it is not without its problems.

The problem was named "endian" after Gulliver's Travels, where people fought wars over which end of an egg to eat from. I used to believe one was better than the other; once I started taking computer engineering classes, I learned it really doesn't make much of a difference. Both are used all over the place, and you just learn to deal with it.
Hopefully the user sees a decent error message rather than silent failure.
IBM’s Power9 servers have so many real cores that maybe it’ll come up.
Encouraging people to boycott a feature because you think it could have security bugs is just FUD.
If anything, the fact that it had vulnerabilities should tell you they have since been fixed.
Yeah, the whole security justification seems to be that one single advisory [x.org] mentions “SProc” a lot. The security bugs in that advisory did not actually have anything to do with byte-swapping; they were traditional integer overflows and the like. One might as well argue that GLX is a security risk on the same basis — it shows up an awful lot in that advisory as well.

Encouraging people to boycott a feature because you think it could have security bugs is just FUD.
How about boycotting the feature because it’s dumb. What sensible PROTOCOL is ever endian-sensitive to begin with? No communications protocol should ever be sensitive to the underlying physical construction of the CPU on one end of it. Of course this “feature” should be turned off, and X should just consolidate on one “endianness” or another in all its communications.
Don’t forget that X was designed in a time with a lot less processing power available, and it needs to be fairly low-latency and deal with a lot of small packets going back and forth. It’s not batch-processing, it is latency-sensitive. I recall X actually causing the NIC to make audible noise because of all the small packets reporting mouse movements. It was slight, but it was there, and no other protocol managed to ride the NIC like that. So you really do want to reduce the amount of work.
If you do the “proper” host-to-network, send, receive, and network-to-host translation dance, that’s two translation steps of which you can avoid at least one, possibly two, by client and host telling each other what endianness they have and if they differ, one of them agrees to do the translation.
So there are good reasons to essentially have two equivalent encodings that can be “unpacked” by simply overlaying with a struct definition and possibly doing some byte swapping along the way, but only once. These days you’d just shrug and say the customer needs to throw more hardware at the problem but really, that’s poor engineering practice. Done properly, this feature is invisible to upper layers, so the claim that it might cause security problems points at something else, developer ineptitude.
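That negotiate-then-maybe-swap scheme can be sketched roughly as follows. The message format here is invented for illustration; the real X11 connection setup similarly leads with a byte-order octet, b'B' (MSB first) or b'l' (LSB first).

```python
import struct
import sys

def client_message(width, height):
    # The client announces its native order and packs fields natively,
    # doing no translation work of its own.
    if sys.byteorder == 'little':
        return b'l' + struct.pack('<HH', width, height)
    return b'B' + struct.pack('>HH', width, height)

def server_parse(msg):
    # The server swaps only when the announced order differs from its
    # own; struct does the per-field byte swapping here, once.
    fmt = ('<' if msg[0:1] == b'l' else '>') + 'HH'
    return struct.unpack(fmt, msg[1:])

print(server_parse(client_message(1920, 1080)))  # (1920, 1080)
```

When client and server share an endianness, unpacking degenerates to overlaying the bytes with the struct definition; at most one side ever swaps.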
a) It's not "boycotting". It's turning it off by default because it's a deprecated feature that stems from the mainframe days, when endianness headaches were more common.
b) It's *entirely sensible* security best practice to disable unused code that isn't actively maintained because it's not actively used.
c) Nobody loses. If you don't like it, turn it back on. It's not deleted, just turned off. But since almost nobody uses this feature because it's *very* hard (maybe impossible?) to find an X terminal that suppo
In X terminology, the X server runs on the computer with the graphics display, not on the (historically, anyway) big hardware with lots of CPU, memory, and storage, AKA the "server". When you started X on your little-endian Windows 3 PC with a VGA graphics card and telneted to the big-endian Sun/SGI/HP/Digital/IBM AIX server to run X Emacs, X Emacs would be the client of the X server running on the PC.
Description has client and server backward
No it doesn’t. A server serves resources. A fileserver serves storage. A print server serves printers. A display server serves the display, and this is what X does.
Once upon a time, networks would sometimes have large machines that served up compute, nicknamed "servers". It would get confusing if you referred to both things as "servers" without a qualifying term. I think 90% of this was random X hate, because no one ever objected to connectin
I don’t see anything in the description that contradicts that at all.
Yeah, I'm not running Fedora. Nor do I sign up for those distro surveys where I report what I am running. (I have done these in the past, back when Slackware was a common distro.)
Do I connect a headless PowerPC (Mac G3 B&W) to a Raspberry Pi X11 and even run a little bit of GLX? Sure, all the time. Makes it convenient to test code without a theoretical KVM that can handle HDMI and VGA simultaneously. (You can to some extent, if you have an older monitor that switches between analog and digital DVI quickly and some splitters for a DVI KVM, but in practice once your KVM breaks it's hard to replace.)
Does it not still work, just with a performance hit to do some byte swapping? I seem to recall that PPC had some instructions to speed that up too.
That's a fair question. If I read the article correctly, it disables the server-side byte swap. I don't think client libraries like XCB support any client-side byte swap, so you're out of luck there. It's entirely possible the consequences are overblown just to get us to click on and comment on the article.
Maybe big-endian is dead. It shouldn't be, but from a business point of view, if aarch64,
Red Hat's parent, IBM, sells bi-endian-capable POWER hardware, but Fedora has switched to ppc64le.
Their employees who maintain the distro thus aren't routinely testing big-endian as a development activity.