Note that we have four generally-available "clusters": faculty, ilab, grad, and research. The sections below describe each. Each cluster has its own home directories, so if you have access to both ilab and grad, you won't be able to see your ilab home directory from the grad systems, and vice versa. There is also a common file system shared by all clusters, /common/users; your directory on it is /common/users/NETID. Home-directory quotas on the clusters are a few GB; quotas on /common/users are 100 GB.
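If you want to see how close you are to the 100 GB /common/users quota, one quick way is to total up your directory tree. This is a minimal Python sketch, not an official tool: the NetID shown is a hypothetical placeholder, and it sums apparent file sizes rather than consulting the quota system itself.

```python
import os

netid = "xy123"  # hypothetical placeholder; use your own NetID
root = f"/common/users/{netid}"

total = 0
for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            # lstat so symlinks aren't followed (and double-counted)
            total += os.lstat(os.path.join(dirpath, name)).st_size
        except OSError:
            pass  # file vanished or unreadable; skip it

print(f"{total / 2**30:.2f} GB used of the 100 GB /common/users quota")
```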
IMPORTANT: the CS Linux systems enforce resource limitations that all users should be aware of. Make sure to check that page before running a big job.
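One way to see some of what applies on the machine you are logged in to is to query the per-process limits directly. A sketch using Python's standard resource module (Linux-only); note this shows only classic rlimits, which may be just part of what is enforced (it won't show cgroup or scheduler policies).

```python
import resource

# Per-process limits on this host: address space, CPU seconds, process count.
for name in ("RLIMIT_AS", "RLIMIT_CPU", "RLIMIT_NPROC"):
    soft, hard = resource.getrlimit(getattr(resource, name))
    show = lambda v: "unlimited" if v == resource.RLIM_INFINITY else str(v)
    print(f"{name}: soft={show(soft)}, hard={show(hard)}")
```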
Faculty Machines
Faculty offices, plus some servers. There are two servers, constance.cs and porthos.cs. However, we recommend logging in to faculty.cs.rutgers.edu; that will give you either constance or porthos at random, and if one is down it will give you the one that is up. (There are also some old Sparc systems; they will be decommissioned sometime during 2018-19.)
These are not very powerful systems. They are intended primarily for keeping records in an environment with no students; serious computing is normally done on the research cluster.
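If you're curious which hosts an alias like faculty.cs.rutgers.edu (or ilab.cs.rutgers.edu below) can map to, and the alias is implemented in DNS as round-robin names usually are, this small sketch resolves it. It assumes you're on a network where the Rutgers CS names resolve.

```python
import socket

# Resolve the round-robin alias to see the candidate addresses behind it.
for alias in ("faculty.cs.rutgers.edu", "ilab.cs.rutgers.edu"):
    addrs = {info[4][0] for info in
             socket.getaddrinfo(alias, 22, proto=socket.IPPROTO_TCP)}
    print(alias, "->", ", ".join(sorted(addrs)))
```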
Ilab and Grad Machines
A set of three large servers (including 1080Ti GPUs and an instructional Hadoop cluster), plus desktops in the Ilab (2nd floor of Hill) and in grad student offices. Most students access these systems remotely, by ssh or X2Go. Please use the alias ilab.cs.rutgers.edu, which will give you one of the three large servers (the same kind of round-robin alias as faculty.cs.rutgers.edu above). You can also look at the usage graphs to find a desktop system that isn't in heavy use.
Research Machines
Servers for use by faculty and grad students working on research projects. Faculty can authorize students to have access. The major system here is aurora, a large multi-core system with SSD storage. The older systems tall3 and tall4 are also available.
Many of our systems have GPUs (a sketch for listing a host's GPUs follows this list):
- 30 of the Ilab systems have a Radeon 8830
- The 3 ilab servers each have 8 1080Ti GPUs
- atlas: K40, Quadro K4000
- cray1: 2 K80s
- k40c-1: 2 K40s
- gpu: K40, Titan
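To confirm what is actually installed on the host you landed on, you can ask the NVIDIA driver directly. A minimal sketch; it assumes the host has NVIDIA GPUs with the standard nvidia-smi tool installed (the Radeon desktops would need AMD's tools instead).

```python
import subprocess

# "nvidia-smi -L" prints one line per detected GPU.
try:
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    print(result.stdout or result.stderr)
except FileNotFoundError:
    print("nvidia-smi not found on this host (no NVIDIA driver installed?)")
```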
CBIM has a set of servers with large memory and GPUs. Those outside CBIM need permission from Dimitris Metaxas to use them, but permission is generally granted to CS researchers:
- 12 Dell systems, each with 4 K80s, 512 GB memory, and 2 SSDs
- 3 HP systems, each with 2 K80s, 256 GB memory, and 13 TB of disk
High-performance systems: OARC
OARC is a University group that provides high-performance computing. Computer science in general doesn't have a conventional HPC cluster; we concentrate on GPUs and more specialized hardware. For large-scale HPC, OARC is the best source. They have a large cluster, Amarel, intended as a "condo" cluster: grants buy nodes and are guaranteed at least as much capacity as they purchased, with the cost matched by the University. However, some capacity is available for those who haven't bought into the system, particularly for course work and student use.
For more information see the OARC web site.
Some of OARC's nodes have GPUs, typically Nvidia.
Virtual Machines
LCSR can provide virtual machines, both for researchers and for use by classes. To request a system, please send mail to email@example.com.
Researchers commonly use VMs for web servers and other support services. We normally put those VMs on the same servers used for LCSR infrastructure.
For course use, we talk with the instructor to find out the configuration and software needed, then create one VM per user. We can also create VMs for grad students. These VMs are placed on one of two large (1 TB each) VM servers purchased specifically for instruction.