This FAQ is intended to forestall the repetitive questions on the Beowulf mailing list. Corrections welcomed. All wrongs reversed. Dates of the form [1999-05-13] indicate the date an entry was last edited, not the date on which the thing it describes last changed. [1999-05-13]
This section takes five minutes to read; please read it before posting!
Robert G. Brown, Greg Lindahl, Forrest Hoffman, and Putchong Uthayopas contributed valuable information to this FAQ.
Kragen Sitaker <kragen@pobox.com> sort of edits it and wrote some of the answers. It's his fault it's so disorganized and out of date.
supercomputer.org is responsible for the hack-n-slash conversion of the FAQ text to HTML, although Kragen made some edits afterwards. You have everyone's encouragement to do a better job.
If you want longer answers, see the long answers.
1. What's a Beowulf? [1999-05-13]
It's a kind of high-performance massively parallel computer built primarily out of commodity hardware components, running a free-software operating system like Linux or FreeBSD, interconnected by a private high-speed network. It consists of a cluster of PCs or workstations dedicated to running high-performance computing tasks. The nodes in the cluster don't sit on people's desks; they are dedicated to running cluster jobs. It is usually connected to the outside world through only a single node.
Some Linux clusters are built for reliability instead of speed. These are not Beowulfs.
2. Where can I get the Beowulf software? [1999-05-13]
There isn't a software package called "Beowulf". There are, however, several pieces of software many people have found useful for building Beowulfs. None of them are essential. They include MPICH, LAM, PVM, the Linux kernel, the channel-bonding patch to the Linux kernel (which lets you 'bond' multiple Ethernet interfaces into a faster 'virtual' Ethernet interface), the global pid space patch for the Linux kernel (which, as I understand it, lets you see all the processes on your Beowulf with ps, and maybe kill them, etc.), and DIPC (which lets you use SysV shared memory, semaphores, and message queues transparently across a cluster). [Additions? URLs?]
3. Can I take my software and run it on a Beowulf and have it go faster? [1999-05-13]
Maybe, if you put some work into it. You need to split it into parallel tasks that communicate using MPI or PVM or network sockets or SysV IPC. Then you need to recompile it.
Or, as Greg Lindahl points out, if you just want to run the same program a few thousand times with different input files, a shell script will suffice.
As Christopher Bohn points out, even multi-threaded software won't automatically get a speedup; multi-threaded software assumes shared memory. There are some distributed shared memory packages under development (DIPC, Mosix, ...), but the memory access patterns in software written for an SMP machine can result in a loss of performance on a DSM machine.
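To give a rough idea of what splitting a program into communicating tasks looks like, here is a minimal sketch in C using MPI. The loop, the problem size, and the strided partitioning are invented for illustration; a real application needs its own decomposition.

    /* sketch: split a loop over N items across MPI ranks and combine the
       results on rank 0.  compile with your MPI C compiler wrapper, e.g.
       "mpicc split.c -o split" under MPICH. */
    #include <stdio.h>
    #include <mpi.h>

    #define N 1000000                       /* arbitrary problem size */

    int main(int argc, char **argv)
    {
        int rank, size;
        long i;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes? */

        /* each rank takes a strided share of the iterations */
        for (i = rank; i < N; i += size)
            local += 1.0 / (double)(i + 1);     /* stand-in for real work */

        /* combine the partial results on rank 0 */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f\n", total);

        MPI_Finalize();
        return 0;
    }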
4. PVM? MPI? Huh? [1999-05-13]
PVM and MPI are software systems that allow you to write message-passing parallel programs, in Fortran and C, that run on a cluster. PVM was the de facto standard until MPI appeared, but it is still widely used and really good. MPI (Message Passing Interface) is a de facto standard for portable message-passing parallel programs, standardized by the MPI Forum and available on all massively parallel supercomputers.
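To give a taste of what message passing looks like, here is a minimal hypothetical MPI example in C: rank 0 sends a number to rank 1, which prints it. The value and the message tag are arbitrary; a PVM version would use calls like pvm_send() and pvm_recv() instead.

    /* sketch: the simplest possible message-passing program. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        double msg = 42.0;          /* arbitrary payload */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* explicit send: destination rank 1, tag 0 */
            MPI_Send(&msg, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* explicit receive: source rank 0, tag 0 */
            MPI_Recv(&msg, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %f from rank 0\n", msg);
        }

        MPI_Finalize();
        return 0;
    }

Run it with at least two processes, for example "mpirun -np 2 ./a.out" under MPICH or LAM.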
More information can be found in the PVM and MPI FAQs.
5. Is there a compiler that will automatically parallelize my code? [1999-05-13]
No. There is this thing called BERT from plogic.com which will help you manually parallelize your Fortran code. And NAG's and Portland Group's Fortran compilers can also build parallel versions of your Fortran code, given some hints from you (in the form of HPF and OpenMP (?) directives). These versions may not run any faster than the non-parallel versions.
6. Why do people use Beowulfs? [1999-05-13]
Either because they think they're cool or because they get supercomputer performance on some problems for a third to a tenth the price of a traditional supercomputer.
7. Does anyone have a database that will run faster on a Beowulf than on a single-node machine? [1999-05-13]
No. Oracle and Informix have databases that might do this someday, but they don't yet do it on Linux.
8. Do people use keyboard-video-mouse switches? [1999-05-13]
Most people don't because they don't need them. Since they're running Linux, they can just telnet to any machine anyway unless it's broken. Lots of Beowulfs don't even have video cards in every node. Console access is generally only needed when the box is so broken it won't boot.
Some people use serial consoles instead, even for this.
9. Who should I listen to and who's a bozo? [1999-05-13]
I don't know who's a bozo. Maybe me. Don Becker, Walter B. Ligon, Putchong Uthayopas, Christopher Bohn, Greg Lindahl, Doug Eadline, Eugene Leitl, Gerry Creager, and William Rankin are generally thoughtful and well-informed, as well as frequently willing to help. Probably other people in this category too.
Robert G. Brown claims to be a bozo, but I don't believe him, even though he showed me his clown face. Rob Nelson also claims to be a bozo, but I think he is mistaken.
10. Does anyone have a Linux compiler that recognizes bits of code that could be optimized with KNI, 3DNow!, and MMX instructions? [1999-05-13]
No. Well, PentiumGCC has some support for this.
11. Should I build a cluster of these 100 386s? [1999-05-13]
If it's OK with you that it'll be slower than a single Celeron-333 machine, sure. Great way to learn.
12. Do I need to run Red Hat? [1999-05-13]
No. Indeed, the original Beowulf ran Slackware.
13. I'm using the Extreme Linux CD . . . [1999-05-13]
Don't -- it's way out of date.
14. Does Beowulf need glibc? [1999-05-13]
No. But if you want to run a particular application on a libc5-based Beowulf, make sure it compiles and works with libc5. Similarly, if you want to run a particular application on a glibc-based Beowulf, make sure it compiles and works with glibc.
Configuring different nodes with different software is not recommended; it's a headache.
15. What compilers are there? [1999-05-13]
The gcc family, Portland Group, KAI, Fujitsu, Absoft, PentiumGCC, and NAG. Compaq is about to release beta AlphaLinux compilers, which are reputedly excellent, and some people already compile their applications under Digital Unix and run them on AlphaLinux.
16. What's the most important: CPU speed, memory speed, memory size, cache size, disk speed, disk size, or network bandwidth? Should I use dual-CPU machines? Should I use Alphas, PowerPCs, ARMs, or x86s? Should I use Xeons? Should I use Fast Ethernet, Gigabit Ethernet, Myrinet, SCI, FDDI? Should I use Ethernet switches or hubs? [1999-05-13]
IT ALL DEPENDS ON YOUR APPLICATION!!!
Benchmark, profile, find the bottleneck, fix it, repeat.
Some people have reported that dual-CPU machines scale better than single-CPU machines because your computation can run uninterrupted on one CPU while the other CPU handles all the network interrupts.
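If you want a crude first cut at "benchmark, profile, find the bottleneck," something like the following C sketch can help separate compute time from communication time. It is illustrative only: the busy loop and the broadcast are stand-ins for your real computation and communication, and all the sizes are arbitrary.

    /* sketch: crude per-node timing of compute vs. communicate using
       MPI_Wtime(), the wall-clock timer MPI provides. */
    #include <stdio.h>
    #include <mpi.h>

    #define MSGLEN 100000               /* arbitrary message length (doubles) */

    int main(int argc, char **argv)
    {
        static double buf[MSGLEN];
        double t0, tcomp, tcomm, x = 0.0;
        int rank, size, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        t0 = MPI_Wtime();
        for (i = 0; i < 10000000; i++)  /* stand-in for the real computation */
            x += (double)i * 1e-9;
        tcomp = MPI_Wtime() - t0;

        t0 = MPI_Wtime();
        /* rank 0 broadcasts a buffer to everyone; stand-in for the real
           communication pattern */
        MPI_Bcast(buf, MSGLEN, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        tcomm = MPI_Wtime() - t0;

        printf("rank %d of %d: compute %.3fs, communicate %.3fs (x=%g)\n",
               rank, size, tcomp, tcomm, x);

        MPI_Finalize();
        return 0;
    }

If communication dominates, look at your network (or your message sizes); if computation dominates, a faster CPU or better serial code will help more than a faster interconnect.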
17. Can I make a Beowulf out of different kinds of machines -- single-processor, dual-processor, 200MHz, 400MHz, etc.? [1999-05-13]
Sure. Splitting up your application optimally gets a little harder but it's not infeasible.
18. Where to go for more information? [1999-05-13]
The "Long answers" section of this FAQ
http://beowulf.org/
http://beowulf-underground.org/
http://beowulf.gsfc.nasa.gov/
(currently the same as http://beowulf.org/)
http://www.extremelinux.org/
http://www.xtreme-machines.com/x-links.html
The beowulf@beowulf.gsfc.nasa.gov mailing list
The "Supplementary information and resources" section of this FAQ
19. Is there a step by step guide to build a Beowulf? Is there a HOWTO? [1999-05-13]
Look at http://www.xtreme-machines.com/x-cluster-qs.html. That document will get you going. See also the docs in the "Docs" section of the "Supplementary information and resources" section of this FAQ.
Is there a compiler that will automatically parallelize my code for a Beowulf, like SGI's compilers? [1999-05-13]
Robert G. Brown writes:
With a few exceptions where a tool like BERT can tell you where and how to parallelize, or an obvious routine is called that has a plug-in parallel version, it is highly nontrivial to parallelize code. This is simply because a tool usually can't see all of the dependencies and time orderings in your program, and it is VERY difficult to make a truly reliable tool that unravels everything. With a pointer-based language like C it is all but impossible.
A second problem (aside from determining what in your code can safely be parallelized) is determining what can SANELY be parallelized. Code that will run efficiently on one parallel architecture may run slower than single-threaded code on another.
A third problem is to determine the ARRANGEMENT of your code that runs most efficiently on whatever architecture you have available (beowulf, cluster, or otherwise). Sometimes code that on the surface of things runs inefficiently can be rearranged to run efficiently. However, this rearrangement is not usually obvious or intuitive to somebody who writes serial von Neumann code, and it is usually nothing at all like the original serial code one wishes to parallelize.
The proper answer to your question is therefore: "No," it is not essential to use PVM or MPI -- one can use raw sockets on the "do it all yourself" end, or NFS on the "all I know how to do or care to learn is open and write to a file" end, with perhaps some ground in between. However, the answer is ALSO "No," it is almost certainly not enough to just recompile, even with the smartest of compilers. The problem is too complex to fully automate, and the underlying serial code being parallelized may need complete rearrangement, not just a plug-in routine.
See http://noel.feld.cvut.cz/magi/soft.html for more.
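For the curious, here is a toy C sketch of the raw-sockets end of the spectrum Brown mentions. The port number, the "work" (a single double that the worker doubles and sends back), and the near-total lack of error handling are all just for illustration, and shipping raw doubles between machines with different byte orders will not work.

    /* sketch: do-it-yourself master/worker over a TCP socket.
       run "./sock server" on one node, then "./sock client <hostname>"
       on another. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define PORT 5123                   /* arbitrary port for this example */

    int main(int argc, char **argv)
    {
        struct sockaddr_in addr;
        int s, c;
        double work = 3.14, result;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);

        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            /* worker end: accept one connection, read a number, return it doubled */
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            s = socket(AF_INET, SOCK_STREAM, 0);
            bind(s, (struct sockaddr *) &addr, sizeof(addr));
            listen(s, 1);
            c = accept(s, NULL, NULL);
            read(c, &work, sizeof(work));   /* a real program would loop until
                                               all the bytes arrive */
            result = work * 2.0;
            write(c, &result, sizeof(result));
            close(c);
            close(s);
        } else if (argc > 2 && strcmp(argv[1], "client") == 0) {
            /* master end: connect to the worker, ship the work, read the answer */
            struct hostent *h = gethostbyname(argv[2]);
            if (!h) { fprintf(stderr, "unknown host %s\n", argv[2]); exit(1); }
            memcpy(&addr.sin_addr, h->h_addr, h->h_length);
            s = socket(AF_INET, SOCK_STREAM, 0);
            if (connect(s, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
                perror("connect");
                exit(1);
            }
            write(s, &work, sizeof(work));
            read(s, &result, sizeof(result));
            printf("sent %f, got back %f\n", work, result);
            close(s);
        } else {
            fprintf(stderr, "usage: %s server | client <host>\n", argv[0]);
        }
        return 0;
    }

PVM and MPI exist largely so you don't have to write this kind of plumbing, or get its failure modes right, yourself.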
Several pieces of software: [1999-05-13]