Firefox (xpcshell) being a pain



So I'm trying to compile Firefox 19. You wouldn't think that's a hard task, but apparently it is!

"/tmp/FFBUILD/src/mozilla-release/obj-x86_64-unknown-linux-gnu/dist/bin/xpcshell: symbol lookup error: /tmp/FFBUILD/src/mozilla-release/obj-x86_64-unknown-linux-gnu/dist/bin/xpcshell: undefined symbol: _Z21JS_SetContextCallbackP9JSRuntimePFiP9JSContextjE"

I have absolutely no idea what that means, other than (I think) it's trying to link to some strangely named function that obviously doesn't exist?

So errr, how the heck do I go about fixing this?


That looks like a linker error. The linker is trying to resolve the location of a specific function that is used somewhere in the code but cannot find the function. Try grepping the source for SetContextCallbackP9JSRuntimePFiP9JSContextjE to see where that function is defined and used.
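
Incidentally, that _Z name is just a mangled C++ symbol; c++filt should turn it into something readable, e.g.:

echo "_Z21JS_SetContextCallbackP9JSRuntimePFiP9JSContextjE" | c++filt
# prints: JS_SetContextCallback(JSRuntime*, int (*)(JSContext*, unsigned int))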


I started over again from scratch, using GCC this time instead of Clang. It's halfway through, so the results might change later, but right now...

grep -r "SetContextCallbackP9JSRuntimePFiP9JSContextjE" .

Binary file ./obj-x86_64-unknown-linux-gnu/ipc/testshell/XPCShellEnvironment.i_o matches

Binary file ./obj-x86_64-unknown-linux-gnu/js/xpconnect/src/XPCLocale.i_o matches

Binary file ./obj-x86_64-unknown-linux-gnu/js/xpconnect/src/XPCJSRuntime.i_o matches

Binary file ./obj-x86_64-unknown-linux-gnu/js/src/libjs_static.a matches

Binary file ./obj-x86_64-unknown-linux-gnu/js/src/jsapi.i_o matches

Binary file ./obj-x86_64-unknown-linux-gnu/js/src/shell/js matches

...And that's it :s


Although Clang is generally an excellent compiler (I much prefer it to GCC when I'm developing or debugging a program, thanks to its fantastic error messages and strict standards compliance), some complex programs that were developed to build with GCC simply won't build with it (yet). Some of those programs (including Firefox) even had to make changes to build with GCC 4.x, because GCC 3.x readily accepted many language extensions that GCC 4 removed (or deprecated) to move closer to strict standards compliance. Although GCC 4.8 has much better ANSI C standard support than GCC 3.4 did, it is not perfect. Many non-standard features can still be optionally enabled by passing the compiler the right switches, since those features can be very useful if you are not worried about portability between compilers.

When I suggested grepping for the function name, I intended you to search the source code, not the build directory. (I was not clear about that; I apologize.) It is not especially useful to know which object files contain references to that function, because you don't know which one contains the implementation. The source code will tell you that.
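
Something along these lines should keep the object files out of the results (the directory names are my guess at the mozilla-release tree layout):

# search only the JS engine and XPConnect sources, skipping the obj-* build directory
grep -rn "JS_SetContextCallback" js/src js/xpconnect --include="*.h" --include="*.cpp"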


I grepped from the top-level directory (so it covered both the source and build folders), and GCC has now hit the error too, but for this:

/tmp/FFBUILD/src/mozilla-release/obj-x86_64-unknown-linux-gnu/dist/bin/xpcshell: symbol lookup error: /tmp/FFBUILD/src/mozilla-release/obj-x86_64-unknown-linux-gnu/dist/bin/xpcshell: undefined symbol: __gcov_indirect_call_profiler
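
From a bit of searching, __gcov_indirect_call_profiler seems to come from GCC's profiling runtime (libgcov), which would suggest part of this build is being compiled with profiling instrumentation. Something like this (path taken from the error above) should at least show which gcov symbols xpcshell expects at runtime:

# "U" entries are dynamic symbols the binary expects something else to provide
nm -D obj-x86_64-unknown-linux-gnu/dist/bin/xpcshell | grep gcov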


Which distro and arch are you trying to build this on? I'm not sure exactly why you're getting that error, but I know from experience that the Iceweasel source has patches specifically to make Firefox compile on PowerPC and other architectures Mozilla does not support (but Debian does). Even if you are not using Debian, this might help. (Particularly take a look at patches contained in the Iceweasel 19.0-1 diff.)


It's just plain Firefox 19 on Arch Linux x86_64!

I found something for Arch that says to build in a clean chroot, which I'm trying now, but it seems like a load of crap tbf; I've compiled Firefox like this before and it's worked fine :s
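
For reference, the clean-chroot route on Arch goes through the devtools package; roughly this (the paths are just examples):

# build the clean chroot once, then build inside it
pacman -S devtools
mkarchroot ~/chroot/root base-devel
makechrootpkg -c -r ~/chroot    # run from the directory containing the PKGBUILD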


Great, xpcshell strikes again...

" adding: hyphenation/hyph_lt.dic (deflated 51%)

adding: hyphenation/hyph_nb.dic (deflated 52%)

adding: hyphenation/hyph_la.dic (deflated 59%)

adding: hyphenation/hyph_de-1901.dic (deflated 54%)

adding: hyphenation/hyph_fr.dic (deflated 60%)"

And xpcshell has been running for about an hour in this chroot using 100% CPU doing... nothing!?


I really don't have any specific experience developing or packaging Firefox beyond building it for Debian Squeeze PowerPC, and I have no more ideas for debugging the error you're encountering. I doubt you're going to get any more help here on Neowin considering how many people have contributed to this thread already. You should probably try asking on the Mozilla forums or asking the Arch Firefox maintainer. If you do find a solution though, I would be interested in knowing how you solved it.


I've got another SCSI drive out and installed Arch on it to try again... Something pretty interesting to note is that the current version of Arch is NOTHING like previous installs; for example, on all other Arch installs on this and my other machines I've got eth0 and eth1, but on here I've got some random names, enp7s2 and enp4s0... Really not sure why.

Anyway, while it's compiling I'm noticing it's passing both -O2 AND -O3 for all the files, optimisation levels 2 and 3... Firstly, I have no idea why it's passing -O3 unless Mozilla make it do that, because I don't have it set anywhere on the system; and secondly, I thought -O3 broke a lot of things, so now I'm wondering if it's breaking the build.
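
From what I can tell, when GCC sees more than one -O it just uses whichever comes last, so -O2 followed by -O3 effectively builds at -O3; something like this should confirm which level actually wins:

# -ftree-vectorize is normally only enabled at -O3 on this era of GCC,
# so if it shows up as [enabled] here, the trailing -O3 has taken effect
gcc -O2 -O3 -Q --help=optimizers | grep tree-vectorize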


How ironic: it's compiled fine on a new install... I think upgrading Arch has borked the whole system; there's no way it should stop Firefox from compiling on two entirely different systems... I think I might be moving away from Arch, the rolling-release model just has way too many bugs and problems for my liking.

Ah well, trying a compile on this system using Clang/LLVM now and will upload AntiSocialFox 19 after it's finished and I go back to the other drive.


Shouldn't compiling Firefox in a clean chroot have produced the same result as compiling it on a clean install? Not to be insulting, but are you sure you set up the chroot properly? I'm not sure whether Arch has a facility like this, but I often use pbuilder on Debian to make sure my packages build in a clean environment. I also configured a hook in my pbuilderrc to drop me to a shell prompt inside the chroot in the event of a build failure so I can investigate it.
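
For reference, the hook is basically the stock C10shell example from the pbuilder documentation, something like this (the hook directory is whatever HOOKDIR in your pbuilderrc points at):

# ~/.pbuilderrc
HOOKDIR=/var/cache/pbuilder/hooks

# /var/cache/pbuilder/hooks/C10shell, a shell script; "C" hooks run after a build failure
apt-get install -y --force-yes vim less
cd /tmp/buildd/*/debian/..
/bin/bash < /dev/tty > /dev/tty 2> /dev/tty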

I have also read that -O3 should generally be avoided because the optimizations it makes are potentially unsafe. It can achieve much greater optimization in some cases, but code that works with -O3 on one platform (or compiler) is not guaranteed to work on another. On the other hand, -O2 is generally considered safe.
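
If you want to force -O2 throughout, I believe a mozconfig line along these lines should do it (I haven't tried it against Firefox 19 myself):

# .mozconfig: override the optimisation flags Mozilla's build system picks
ac_add_options --enable-optimize="-O2"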


The chroot compile gets to the point where it normally failed and just sits there frozen, with xpcshell using 100% CPU but doing nothing for over three hours, so each time I tried it I just ended up killing xpcshell and the build failed. The chroot was set up fine. Honestly, I think the problem with the chroot is down to either the chroot not emulating something properly (like how, if you try to compile things for ARM in an ARM chroot on an x86 PC, all you end up with is broken binaries that don't run on the real hardware), or a problem with xpcshell itself or how it runs (it's some sort of JavaScript shell, apparently).

Yeah, I've no idea what was going on with -O3. It was Mozilla putting it in though, because the -O2 was coming from my settings, and later in the command lines/output you could see the build system mysteriously appending -O3. But heck, the compiled programs work fine now, so all's good :)


Have you tried setting up an ARM chroot on an AMD64 installation? I don't see how that would work, since the instruction set is completely different. I can run an i386 chroot on my AMD64 system because the processor and the Linux kernel both support i386 and AMD64 natively. However, I can't run a Debian GNU/kFreeBSD chroot on my Debian GNU/Linux installation even though they both use the AMD64 instruction set, because the kernels are binary-incompatible. The same logic applies to ARM: AMD64 Linux is binary-incompatible with ARM Linux.

By the way, did Firefox compile in your new installation with Clang or did you revert to GCC?


Yes, it's possible: you use the qemu user-mode binary and register it with binfmt_misc (via the binfmt support utilities), and the system will then run ARM binaries as if they were native.
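
Roughly like this; the package names vary by distro and the paths are just examples:

pacman -S qemu-user-static            # on Debian: apt-get install qemu-user-static binfmt-support
# register qemu-arm-static with binfmt_misc (update-binfmts handles this on Debian),
# then copy the static emulator into the chroot so it can be found from inside it
cp /usr/bin/qemu-arm-static /path/to/arm-chroot/usr/bin/
chroot /path/to/arm-chroot /bin/bash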

And nope, Clang got to the same error moaning about that function. I noticed there's a configure option for adding LLVM hacks to get it to compile; I can't remember if I tried it, but from reading up it only seemed to work on Macs anyway.


Now I understand how it's possible to run an ARM chroot on AMD64, but I submit that using qemu is cheating! You're technically running a chroot, but qemu is translating the ARM instructions into your processor's native instructions, hence "cheating". That said, it sounds a lot like the method implemented by qemubuilder (which builds packages for PowerPC on my 2.4 GHz Core 2 Quad much faster than I can build them natively on my 1.5 GHz G4, despite the fact that qemu is wasting cycles doing translation on the former but not the latter). In my experience the packages built with qemubuilder work properly when installed on their target architecture.


There are two ways to build packages for a different system. One is using qemu, which might work (I just had a quick Google and there are plenty of people using qemu to compile MAME and whatnot for the Raspberry Pi; I was building kernels around two months after the RPi got released), but as qemu is an emulator it's never guaranteed to behave 100% like the real hardware; look at SNES emulators, for example... the only truly accurate SNES emulator is bsnes!

The other method is a proper cross-compile: you use GCC (or another compiler) built to run on your architecture but generating code for the target system rather than your own processor. Binaries are effectively guaranteed to work this way, because the compiler/assembler/linker behave exactly as they would running natively on the target system, and you can still do everything GCC normally does, such as distcc builds for faster compilation.
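
e.g. with a prebuilt ARM hard-float toolchain (the toolchain name and flags here are just illustrative):

# compile a test program with the cross toolchain, then check it really is an ARM binary
arm-linux-gnueabihf-gcc -O2 -o hello hello.c
file hello    # should report something like "ELF 32-bit LSB executable, ARM"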

With qemu you've also got the problem of the actual CPU variant: e.g. the Raspberry Pi is armv6h (the h standing for hardware floating point), and from what I remember qemu emulates a fairly generic ARM CPU rather than a specific model, so if, for example, it emulated ARMv7, you'd be generating instructions an ARMv6 chip can't run. This isn't as much of a problem for other CPUs such as PowerPC or x86; x86, for instance, has extra instruction sets bolted on like SSE, SSE2, AES-NI, Intel TXT, etc., but these aren't part of the baseline x86 instruction set (i686) and need explicit -march/-m flags (e.g. -msse2, -maes) before GCC and other compilers will use them.
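
For instance (flags just illustrative):

# the extra instruction sets are only emitted if you ask for them explicitly
gcc -O2 -msse2 -maes -o demo demo.c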

