Mir: Ubuntu's New Display Server



Here's the first post from someone at Canonical that actually does a better job of detailing the reasons for Mir:

https://plus.google....sts/PXc93m8nKwk

Now why the hell didn't they put THIS in their Mir announcement, instead of all that ridiculous FUD against Wayland? I swear, it's like Ubuntu tries to make these announcements in the most tactless ways imaginable; they really need to work on their PR.

I still have concerns about their plan to have Mir fully working by 2014, though; I highly doubt that will happen.

EDIT: Comments from other developers on that link show there are a lot of inaccuracies in this more detailed explanation too. Canonical seems to keep digging itself deeper into a hole with this.


So instead of taking 1-3 years to work together on one good solution, it's supposedly much better that it takes them ~10 years or more to make several different ones, agree on the one that's "the best", and then spend another 1-3 years adding all the good features from the other projects, resulting in the same final product you could have had 10 years ago by simply working together.

Your argument is a fallacy. And this unnecessary fragmentation is driving regular users and big corporate and government users away.

And then there's the escalation and cascade effect of the problem: one main project splits into three, another project does the same, and so on. As a user you may want a certain combination of all these fragmented projects, because only one of each provides the best results for you. But despite the fact that there are now 150 distros with different combinations of all these fragmented core projects, none has the combination you need.


So instead of taking 1-3 years to work together on one good solution, it's supposedly much better that it takes them ~10 years or more to make several different ones, agree on the one that's "the best", and then spend another 1-3 years adding all the good features from the other projects, resulting in the same final product you could have had 10 years ago by simply working together.

Your argument is a fallacy. And this unnecessary fragmentation is driving regular users and big corporate and government users away.

And then there's the escalation and cascade effect of the problem: one main project splits into three, another project does the same, and so on. As a user you may want a certain combination of all these fragmented projects, because only one of each provides the best results for you. But despite the fact that there are now 150 distros with different combinations of all these fragmented core projects, none has the combination you need.

Although you are correct that it will almost certainly take longer to develop a good solution if there are multiple competing products with different strengths and weaknesses, they will eventually converge into something very useful and technically excellent (or all starve - it's a toss-up!). I admit that it often takes longer to develop a product in this manner than if there is a strong authoritarian leader who can force concord. That is probably the biggest weakness of most open-source projects. On the other hand, projects that do converge and succeed are often top-quality products that many people really like and want to use.

The new Windows 8 user interface (Metro - or Modern - or whatever-it-is-we're-calling-it-now) is an excellent example of an innovative product that came to market relatively quickly because of corporate backing and heavy oversight. However, it is an extremely controversial move which regular users have no say over. It happened. There is nothing we can do about it now. If an open-source project tried to push the same nonsense - such as GNOME 3 completely changing the user interface from GNOME 2 - users would have the choice of rejecting the change (MATE) or adding desired functionality to the new solution (GNOME 3 Fallback and Cinnamon). The GNOME situation is an excellent example of the flexibility of open source that closed-source products do not afford.

Although some distributions, such as Ubuntu, have strong opinions on what should go into the distro and what should not - especially by default - there are many distros that do not exercise such heavy-handed control. The plethora of distros available is proof that different people prefer different configurations, particularly since most of those distros differentiate themselves exclusively by user experience, not by overtly technical criteria. Community-driven "base" distributions, such as Arch and Debian, are excellent examples of projects that allow users to pick and choose the components they want to use - or package others if they are not already in the repository. Arch provides the AUR (Arch User Repository), where users can easily package software not already maintained in the main repository. Debian has a large developer community and a strong mentor program to help new maintainers learn how to package software and upload it to the main repository. In fact, many of the aforementioned distributions that differentiate themselves by user experience are based on Arch or Debian precisely because those bases allow the maintainer to choose which components to include in his distribution by default!


Although you are correct that it will almost certainly take longer to develop a good solution if there are multiple competing products with different strengths and weaknesses, they will eventually converge into something very useful and technically excellent (or all starve - it's a toss-up!). I admit that it often takes longer to develop a product in this manner than if there is a strong authoritarian leader who can force concord. That is probably the biggest weakness of most open-source projects. On the other hand, projects that do converge and succeed are often top-quality products that many people really like and want to use.

Mmm yeah... *looks at Linux marketshare* .... Yup...

Meanwhile the Linux equivalents of DWM still can't compete with the DWM desktop compositor: huge tearing problems, and performance isn't even half that of DWM when quality is set to match DWM's required minimum (and only) setting. But hey, at least you can have windows that uselessly bounce around like jelly...

Also, Windows users have the choice to run a third-party shell if they don't like Metro, or they can use Start8 or other silly workarounds.


Mmm yeah... *looks at Linux marketshare* .... Yup...

Meanwhile the Linux equivalents of DWM still can't compete with the DWM desktop compositor: huge tearing problems, and performance isn't even half that of DWM when quality is set to match DWM's required minimum (and only) setting. But hey, at least you can have windows that uselessly bounce around like jelly...

Also, Windows users have the choice to run a third-party shell if they don't like Metro, or they can use Start8 or other silly workarounds.

Funny how Valve gets much faster frame rates on the same PC by moving the program over from Windows to Linux and from DirectX to OpenGL; kinda shows how null and void your argument is.


Funny how Valve gets much faster frame rates on the same PC by moving the program over from Windows to Linux and from DirectX to OpenGL; kinda shows how null and void your argument is.

That's nothing new with that guy. He loves to bash Linux and throw every **** at that OS. I have made several complaints to the moderators, but obviously he has a free pass for whatever reason (maybe he's on someone's payroll... M$ maybe?)


Funny how Valve gets much faster frame rates on the same PC by moving the program over from Windows to Linux and from DirectX to OpenGL; kinda shows how null and void your argument is.

That's nothing new with that guy. He loves to bash Linux and throw every **** at that OS. I have made several complaints to the moderators, but obviously he has a free pass for whatever reason (maybe he's on someone's payroll... M$ maybe?)

Yup, hawkman is always grumpy. Always has been, always will be. Must be a Norway thing. :laugh:


Funny how Valve gets much faster frame rates on the same PC by moving the program over from Windows to Linux and from DirectX to OpenGL; kinda shows how null and void your argument is.

Funny how Linux people bring that up but ignore the rest of the article where that one little line was mentioned, like the fact that they got the same improvement on Windows by doing the same optimizations, putting Windows back on top. And it's irrelevant as far as the problem I mentioned goes; in fact it proves my point: working together towards one goal to improve something is better than 10 horses pulling in 10 directions.

And again, how does improved (yet unproven) gaming performance have ANYTHING to do with the ****ty Linux desktop compositors?

Oh right, you were trying to sidetrack from the point.


That's nothing new with that guy. He loves to bash Linux and throw every **** at that OS. I have made several complaints to the moderators, but obviously he has a free pass for whatever reason (maybe he's on someone's payroll... M$ maybe?)

Sorry, but I think you don't understand the difference between differing views and healthy argument, and "that guy disagrees with me, and I (think I) am right, so ban him." Meanwhile, in your countless actual personal attacks on me, you have yet to back up any of your statements beyond "MS sucks" and the ubiquitous "M$"; personally I've never heard of this MicroDollar company and don't see what they have to do with tech or Windows at any rate. Oh, and the FOSS standard "someone doesn't like Linux, they must be on the MS payroll" :rolleyes: Damn, MS has a lot of employees, like 90% of the world...

And again you fail at your trolling, since I don't throw **** at every other OS. I in fact use Linux, I even use it daily; it doesn't mean I can't recognize its faults and complain about them, just like I recognize faults in Windows. Linux would be a much better place if fewer of the developers and fans were like you, unable to see any faults in their system.


All CEOs talk smack; the problem is that Linux is caught in a position where a lot of the competition they talk smack about is their own people, who should be partners working together with them.

Sure, CEOs shouldn't talk smack at all, but at least pick your targets and keep your friends.


The G+ post doesn't really make that much sense; it basically boils down to "We want Wayland, so we duplicated it with Mir".

The single difference seems to be in how they handle buffers (apparently Wayland wants the app to supply the buffer, while Mir wants the server to provide it).
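For context on what "the app supplies the buffer" means on the Wayland side, here's a minimal sketch of a client allocating its own pixel buffer and handing it to the compositor via wl_shm. It assumes a wl_shm global has already been bound, and the helper name and shm path are made up for illustration; this is not Mir or Canonical code, just the client-allocated model being described above.

```c
/* Sketch of Wayland's client-allocated buffer model (not Mir code).
 * Assumes the caller has already connected to the display and bound
 * the wl_shm global; the function and object names here are illustrative. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

/* Create a width x height XRGB8888 buffer backed by client-owned memory. */
static struct wl_buffer *create_client_buffer(struct wl_shm *shm,
                                              int width, int height,
                                              void **pixels_out)
{
    int stride = width * 4;
    int size = stride * height;

    /* The client owns the backing storage: an anonymous shared-memory file. */
    int fd = shm_open("/wl-example-pool", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return NULL;
    shm_unlink("/wl-example-pool");
    if (ftruncate(fd, size) < 0) {
        close(fd);
        return NULL;
    }

    void *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pixels == MAP_FAILED) {
        close(fd);
        return NULL;
    }

    /* Hand the fd to the compositor; the allocation itself happened here,
     * in the client, which is the difference being discussed above. */
    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    struct wl_buffer *buffer = wl_shm_pool_create_buffer(
        pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);
    wl_shm_pool_destroy(pool);
    close(fd);

    *pixels_out = pixels;
    return buffer;
}
```

A client would draw into the mapped memory and then attach the wl_buffer to a surface; under the Mir model described above, the server would instead hand the client a buffer it allocated itself.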

Edit: Claiming they couldn't use Wayland because Weston doesn't have unit tests is silly; the KDE and GNOME guys aren't using Weston at all.


I in fact use Linux, I even use it daily; it doesn't mean I can't recognize its faults and complain about them, just like I recognize faults in Windows

I expected this to come, and I'm glad that you remember I use Windows daily just like you obviously use Linux daily. However, again, it's obviously different to complain here about Linux (widely accepted and supported) than about Windows (which gets your warn level up).

What I have no understanding for is that with Linux there really should be something for everyone. I mean KDE, GNOME, MATE, GNOME Shell, Unity, etc., etc. The same goes for office packages and fonts and browsers and so on.

With Windows your choice is minimal and mostly forced by Microsoft, and still people love it and complain about Linux. And the fact that you complain about something which offers lots of choices, while I complain about a monopolistic trash OS which has much more monetary support behind it and still has basic flaws and errors that were never there in Linux or have already been fixed, doesn't that tell you something?

Well, for me it does.


According to one of the Wayland developers, it already does server-side buffer allocation when needed, so that isn't a reason to use Mir either. The only reason Mir seems to exist is so that Canonical can control it; there's no technical reason why.


I expected this to come, and I'm glad that you remember I use Windows daily just like you obviously use Linux daily. However, again, it's obviously different to complain here about Linux (widely accepted and supported) than about Windows (which gets your warn level up).

You don't get warned for complaining about Windows; you get warned for how you do it.

while I complain about a monopolistic trash OS

And you were wondering why you get warned...

According to one of the Wayland developers, it already does server-side buffer allocation when needed, so that isn't a reason to use Mir either. The only reason Mir seems to exist is so that Canonical can control it; there's no technical reason why.

I'm not sure they even know why. They have shown they don't understand the code behind a display server, and it's some advanced ****, so what makes them think they can make one, especially this fast, and why would they even try? They don't have the knowledge or experience.


@hawk

Have you ever met mudslag? You would like him. He also loves to pull half-quotes out of context. :/


...

I'm not sure they even know why. They have shown they don't understand the code behind a display server, and it's some advanced ****, so what makes them think they can make one, especially this fast, and why would they even try? They don't have the knowledge or experience.

I think it's interesting how a bunch of people who defend/back Mir talk about how Wayland is "slow", yet somehow Mir (with no experts on graphical interfaces or anything involved) can solve all the issues Wayland faced better and quicker.

Even if Mir isn't horrible, I just can't see it catching on anywhere else. Why would any distro pick Mir when you have Wayland/X11 to use? The Qt guys aren't going to be maintaining the Mir backend for Canonical, neither will the GTK guys, etc. The only Mir server is Unity, while Wayland will be run by KWin and Mutter (and others), both provided and maintained by the same people who provide the frameworks. The only apps that would be using Mir would be Canonical-provided Ubuntu apps, and I can see that only as a means of vendor lock-in.


The G+ post doesn't really make that much sense; it basically boils down to "We want Wayland, so we duplicated it with Mir".

The single difference seems to be in how they handle buffers (apparently Wayland wants the app to supply the buffer, while Mir wants the server to provide it).

Edit: Claiming they couldn't use Wayland because Weston doesn't have unit tests is silly; the KDE and GNOME guys aren't using Weston at all.

Yeah, server-side allocation is really only necessary for some ARM SoCs where it performs better; otherwise client-side allocation is better. There is definitely nothing in Wayland that prevents server-side buffer allocation, and it would not have been hard to implement with Wayland.

And yeah, I lol'd at "we don't want to use Weston" as a point. No one is using Weston; that's the point! Weston is just the reference compositor; no one but Wayland developers are supposed to use it, and it's expected that DEs will use their own Wayland compositors. Time and time again Canonical seems to lack even a basic understanding of how Wayland works, and that is what worries me most about Mir; it really gives me the feeling that Mir will be low quality.


@hawk

Have you ever met mudslag? You would like him. He also loves to pull half-quotes out of context. :/

That was all the context that quote needed. The point was merely to point out something called hypocrisy.

Yeah, server-side allocation is really only necessary for some ARM SoCs where it performs better; otherwise client-side allocation is better. There is definitely nothing in Wayland that prevents server-side buffer allocation, and it would not have been hard to implement with Wayland.

And yeah, I lol'd at "we don't want to use Weston" as a point. No one is using Weston; that's the point! Weston is just the reference compositor; no one but Wayland developers are supposed to use it, and it's expected that DEs will use their own Wayland compositors. Time and time again Canonical seems to lack even a basic understanding of how Wayland works, and that is what worries me most about Mir; it really gives me the feeling that Mir will be low quality.

I'd be surprised if it even gets far enough to reach low quality; that seems optimistic. They don't have the knowledge or experience to make Mir, so how would they even make it, low quality or not? They're going to crash into a wall hard.


I'd be surprised if it even gets far enough to reach low quality; that seems optimistic. They don't have the knowledge or experience to make Mir, so how would they even make it, low quality or not? They're going to crash into a wall hard.

I'm pretty sceptical as well, but that doesn't seem to be out of line with most of the reactions to Mir that I've seen so far. The sole exception, besides Mark Shuttleworth and other Canonical employees of course, has been Chris and Matt's reaction to the Mir announcement on LAS. They are optimistic that Mir will succeed and become Canonical's "big contribution" back to the community. I agree more with the Wayland developers, who have put forth solid technical arguments for their platform, than with the fairly weak (non-existent?) technical arguments for Mir.

This thread has been quite interesting because it seems that Canonical keeps shoving their foot further into their mouth every time they speak on the subject of Mir. Only time will tell if their experiment succeeds or not. Considering our harsh predictions, this might be an interesting thread to look at in retrospect in a year or two.


Just an off-topic question for you, since you seem to know a lot about all this: when is Wayland supposed to go mainstream? Is it still up in the air?

There are two main things holding back Wayland now: drivers (which Mir is blocked on as well) and actual display servers (Wayland being primarily a protocol).

The driver issue is because historically the X server handled graphics drivers, while the kernel just provided a simple framebuffer for boot and non-X terminals. A while back they decided to take that off X and move it into the kernel, where it could provide more consistent behaviour (e.g. when switching terminals on a pure X setup, the screen flickers and resets each time because it switches from X to the kernel framebuffer and back; with a Kernel Mode Setting setup the screen just redraws and optionally changes resolution, which is much nicer). Then you also have things like GEM (an API for managing graphics memory) and EGL (a cross-platform binding for OpenGL) that unify a lot of things and provide a consistent base (using EGL helps on mobile, which is why basically everybody wants to use it).
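To make the Kernel Mode Setting part concrete, here's a minimal sketch that talks to the kernel's DRM/KMS interface directly through libdrm and lists the connected outputs and their first advertised modes, with no X server in the picture. The device path and the program structure are illustrative assumptions on my part, not code from Wayland, Weston, or Mir.

```c
/* Sketch: querying display hardware via the kernel's DRM/KMS interface
 * (libdrm), i.e. the "drivers moved into the kernel" setup described above.
 * Illustrative only; assumes a /dev/dri/card0 node and permission to open it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* Ask the kernel, not an X server, what connectors exist. */
    drmModeRes *res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "KMS is not supported on this device\n");
        close(fd);
        return 1;
    }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
            printf("connector %u: first mode %dx%d @ %u Hz\n",
                   conn->connector_id,
                   conn->modes[0].hdisplay,
                   conn->modes[0].vdisplay,
                   conn->modes[0].vrefresh);
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```

Build it against libdrm (e.g. with pkg-config's libdrm flags). A real compositor would then go a step further and call drmModeSetCrtc to program a mode itself, which is exactly the job that used to live inside the X server.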

Nobody wants to use Weston (the reference server from the Wayland project) as their main server; what we're waiting for is a DE like KDE or GNOME to provide a Wayland-backed compositor. They're working on it, but it is a fair bit of effort (they'd have to provide both a Wayland compositor with X11 fallback and their normal X11-backed compositor for quite a while, until the driver situation clears up).


The_Decryptor, you've had some very informative posts in this thread. Thanks for the excellent technical information. I really appreciate it!


There are two main things holding back Wayland now: drivers (which Mir is blocked on as well) and actual display servers (Wayland being primarily a protocol).

The driver issue is because historically the X server handled graphics drivers, while the kernel just provided a simple framebuffer for boot and non-X terminals. A while back they decided to take that off X and move it into the kernel, where it could provide more consistent behaviour (e.g. when switching terminals on a pure X setup, the screen flickers and resets each time because it switches from X to the kernel framebuffer and back; with a Kernel Mode Setting setup the screen just redraws and optionally changes resolution, which is much nicer). Then you also have things like GEM (an API for managing graphics memory) and EGL (a cross-platform binding for OpenGL) that unify a lot of things and provide a consistent base (using EGL helps on mobile, which is why basically everybody wants to use it).

Nobody wants to use Weston (the reference server from the Wayland project) as their main server; what we're waiting for is a DE like KDE or GNOME to provide a Wayland-backed compositor. They're working on it, but it is a fair bit of effort (they'd have to provide both a Wayland compositor with X11 fallback and their normal X11-backed compositor for quite a while, until the driver situation clears up).

They moved display drivers into the kernel? Does this mean that while MS moved the driver out of kernel space and into user space to increase stability and reduce the chance of cascading failures, Linux did the opposite and moved the driver into the kernel... or is this a separate thing?


This topic is now closed to further replies.