VIP visits in building 513 could be organised as follows:

The balcony on the 1st floor with a view of the main room, where you could talk about:

Processing power and disk capacity

Network core

Openlab project

The exhibition area on the ground floor, where you could talk about:

Operators room

On-line WLCG Monitor and Grid Computing

Exhibition about the history of computing at CERN

The ground floor inside the computing center, where you could talk about:

Cooling system

CERN network

Emergency power

The basement, where you could talk about:

Tape capacity

Water-cooled racks

General public visits itinerary

General public visits should be restricted to the following areas:

The balcony on the 1st floor with a view of the main room (19 people maximum for security reasons)

The exhibition area on the ground floor

History of Computing at CERN

Some pictures showing the evolution of computing at CERN throughout the years can be found in the corridor that goes from building 31 to building 513. Visitors can see the evolution from the first 'human computer' to the grid.

The exhibition area shows punch cards, modems and hard disks used 50 years ago! It also contains the computer used by Tim Berners-Lee to invent the web.



Some interesting links about the history of computing at CERN:

CERN's first computer

Computing at CERN in the 70s

Computing at CERN in the 80s

Computing at CERN in the 90s

30 years of Computing at CERN - Part 1

30 years of Computing at CERN - Part 2

30 years of Computing at CERN - Part 3

Slides and pictures about the History of Computing at CERN

50 years of research at CERN: from past to future (Computing)

Operators room

There is always someone working in the Operators room, 24 hours a day, 365 days a year; normally it is just one person. There are documented procedures to operate the different IT services. Critical IT services are also covered by an on-call service staffed by experts, who are contacted by the operator when necessary. The computing center is managed mostly by automated and semi-automated software (anything else would be impossible on this scale!).
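As an illustration of that semi-automated style of operation, here is a minimal sketch (Python); the service names, the probe logic and the notify_on_call helper are all hypothetical, not CERN's actual tooling.

```python
# Hypothetical sketch: probe each service, try the documented recovery
# procedure first, and page the on-call expert only if that fails.

SERVICES = {
    "mail": "http://mail.example/health",
    "batch": "http://batch.example/health",
}

def is_healthy(url):
    """Probe the service (stubbed: a real system would query monitoring)."""
    return False  # pretend the probe failed, to show the escalation path

def run_documented_procedure(name):
    """Apply the documented procedure; return True if it fixed the problem."""
    print(f"operator: running documented procedure for {name}")
    return False  # pretend automatic recovery did not help

def notify_on_call(name):
    print(f"operator: paging the on-call expert for {name}")

for name, url in SERVICES.items():
    if not is_healthy(url) and not run_documented_procedure(name):
        notify_on_call(name)
```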

IT Services for CERN

Some of the IT services run at the CERN computing center:

Mail servers: 99% of the mail that reaches CERN is spam!

Databases: Used in many areas at CERN, such as accelerator operations, physics and human resources. The main technologies are Oracle and NetApp, since they offer scalability, stability and performance. 140 databases (14 for the experiments) of 1-12 TB in size; 3,000 disks and 3 PB of raw disk. (source)

Desktop services: 20,000 Linux + 9,000 Windows + 2,000 Mac (source)

AFS: 15,000 clients (hosts), 28,000 users (home directories), 150 TB of used space, 1.5 billion files, 50,000 accesses/s (source)

Openlab

CERN openlab is a framework for evaluating and integrating cutting-edge IT technologies and services in partnership with industry, focusing on future versions of the WLCG. Through close collaboration with leading industrial partners, CERN acquires early access to technology that is still years away from the general computing market. In return, CERN offers expertise and a highly demanding computing environment for pushing new technologies to their limits, and provides a neutral ground for carrying out advanced R&D with various partners.

CERN openlab is set to enter its sixth three-year phase at the start of 2018. There are currently several industrial partners: Huawei, Intel, Oracle, HP and Siemens, plus many other industrial contributors. The technical activities are organised in different domains of competence:

Through the Automation and Controls Competence Centre (ACCC), CERN and Siemens are collaborating on security, as well as on opening up automation tools towards software engineering and the handling of large environments.

In partnership with Oracle, the Database Competence Centre (DCC) focuses on items such as data distribution and replication, monitoring and infrastructure management, highly available database services, application design, and automatic failover and standby databases.

One focus of the Networking Competence Centre (NCC) was CINBAD, a research project launched by CERN and HP ProCurve to understand the behaviour of large computer networks (10,000+ nodes) in high-performance computing or large campus installations. Since February 2010, a new openlab team under the codename WIND (Wireless Infrastructure Network Deployment) has carried out research and provided new algorithms, guidelines and solutions to support the deployment and operation of the Wi-Fi infrastructure at CERN.

The Platform Competence Centre (PCC) focuses on PC-based computing hardware and the related software. In collaboration with Intel, it addresses important fields such as thermal optimisation, application tuning and benchmarking. It also has a strong emphasis on teaching. To learn more, please check the Openlab website.

Internet Exchange Point

CERN's primary mission is to provide facilities for high energy particle physics experiments. CERN is open to scientists from its 20 member states and from all other countries of the world. This makes CERN one of the largest sources of numerical scientific data in the world. Computer networking, and in particular Internet connectivity, is therefore a mission-critical requirement. CERN operates an IXP in order to facilitate the exchange of Internet traffic in the region and to maximize its own Internet connectivity.



An Internet exchange point (IXP) is a physical infrastructure through which Internet service providers exchange Internet traffic between their networks.
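As a toy illustration of what 'exchanging traffic' means, the sketch below (Python, with invented provider names and prefixes) shows two providers at an exchange learning each other's routes, so that traffic between their customers can flow directly instead of through a third-party transit network.

```python
# Toy model of peering at an exchange point: each provider announces the
# network prefixes it can reach, and each peer merges those announcements
# into its routing table. Names and prefixes are illustrative only.

announcements = {
    "ProviderA": ["192.0.2.0/24"],     # ProviderA's customer networks
    "ProviderB": ["198.51.100.0/24"],  # ProviderB's customer networks
}

routing_tables = {}
for provider, own_prefixes in announcements.items():
    table = dict.fromkeys(own_prefixes, "local")
    for peer, prefixes in announcements.items():
        if peer != provider:
            for prefix in prefixes:
                table[prefix] = f"via {peer} at the IXP"
    routing_tables[provider] = table

print(routing_tables["ProviderA"])
# {'192.0.2.0/24': 'local', '198.51.100.0/24': 'via ProviderB at the IXP'}
```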



The CERN IXP is an open exchange that provides peering facilities between Internet Service Providers and Telecom Operators. For more information, please check the CIXP pages.

Computing Power

Some numbers about the computing power at the CERN computing center (June 2017):

Number of machines: 14,600 servers (11,100 Meyrin and 3,500 Wigner) with 220,000 cores (166,000 Meyrin and 56,000 Wigner) (Source)

All physics computing is done using the Linux operating system and commodity PC hardware. There are a few Solaris server machines as well, used especially for databases (Oracle).

The LHC grid

LHC data is stored and analysed using the computing resources of the WLCG (Worldwide LHC Computing Grid) infrastructure. WLCG is a worldwide collaboration of 169 sites belonging to two major grid infrastructures: the European Grid Infrastructure (EGI, 139 sites) and the Open Science Grid (OSG, 29 sites) in the US. There is also one joint site comprising several Nordic countries from the NDGF infrastructure. WLCG sites are organised in a tier model. Tier-0 is the CERN computing center, where the data from the 4 LHC experiments is first stored. Then there are 14 Tier-1 sites, where LHC data is also replicated on tape. Finally, there are 155 Tier-2 sites providing extra computing resources. LHC data is then processed using the computing power provided by all these sites.
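To summarise that structure, here is a minimal sketch (Python) that simply encodes the tier figures quoted in the paragraph above:

```python
# The WLCG tier model, using the figures quoted above.
tiers = [
    # (tier, number of sites, role)
    ("Tier-0", 1,   "CERN computing center; first copy of the LHC data"),
    ("Tier-1", 14,  "large centres where LHC data is also replicated on tape"),
    ("Tier-2", 155, "sites providing extra computing resources"),
]

for name, sites, role in tiers:
    print(f"{name}: {sites:3d} site(s) - {role}")
```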



Check the WLCG website for more details on the project.



For general information about the Grid, please visit The Grid cafe.



Some useful numbers for WLCG:

Average number of WLCG jobs executed every day: 2 million

Pledged CPU capacity APR-15 to MAR-16: 1,713,100 HS06-years

Pledged Disk capacity APR-15 to MAR-16: 141 PB

Pledged Tape capacity APR-15 to MAR-16: 269 PB

CERN provides about 20% of the total WLCG resources.
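As a back-of-the-envelope illustration, here is a minimal sketch (Python) applying that 20% share to the pledged figures above; it assumes the share is uniform across CPU, disk and tape, which is only an approximation.

```python
# Rough arithmetic: apply CERN's ~20% share to the pledged WLCG totals
# (APR-15 to MAR-16). Assumes the share is uniform across resource types.
pledged = {
    "CPU (HS06-years)": 1_713_100,
    "Disk (PB)": 141,
    "Tape (PB)": 269,
}
cern_share = 0.20

for resource, total in pledged.items():
    print(f"CERN share: ~{total * cern_share:,.0f} of {total:,} {resource}")
```

For real time information, you can check: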

The WLCG Google Earth Dashboard, which is also installed at the entrance of the computer center. It shows real-time data transfers and jobs being executed in the WLCG infrastructure, and gives detailed information about the different WLCG sites when you click on a particular site. The dashboard can be installed on any computer; check the User Guide for more details.

Cooling

Air conditioning is a major problem for data centres everywhere in the world at the moment. As processors get faster they also get hotter, and heat output is currently increasing faster than performance. Racked machines are even worse, as they are densely packed with processors.

Some of the racks at the computing center contain only a few machines, since there is currently not enough cooling capacity to fill them with more. The room was designed with one supercomputer in a corner in mind, not several thousand processors!

It's interesting to mention how the racks are placed. They use a hot/cold aisle configuration: the fronts of the racks face each other across the 'cold' aisle and expel heat from their backs into the 'hot' aisle. Doors and roofs placed over the cold aisles increase efficiency by preventing the warm air from mixing with the cold air. The cold air comes up from the floor inside the 'cold' aisle, having been brought into the building through the big blue pipes that come from the roof and go down to the floor. Three chillers are responsible for cooling the air; this process consumes no energy during the winter months, when cold air is taken directly from outside.

In the last upgrade of the computing center, a new special IT room was built at the back of the main room on the ground floor to host critical IT equipment. It uses water-cooled racks with passive heat exchangers, which consume no extra power. These racks are more efficient: each can cool 10 kW of equipment, compared to the 4 kW per rack that the hot/cold aisle configuration can handle. Water-cooled racks are also more expensive and more complicated to install, since water pipes are needed. In the basement, where water pipes are easily accessible, water-cooled racks have been installed as well; these can be shown during the visit.
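To make the 4 kW vs 10 kW comparison concrete, here is a back-of-the-envelope sketch (Python); the 600 kW total heat load is an invented example figure, and only the per-rack capacities come from the text above.

```python
import math

# Per-rack cooling capacities quoted above.
AIR_COOLED_KW = 4     # hot/cold aisle configuration
WATER_COOLED_KW = 10  # water-cooled racks with passive heat exchangers

heat_load_kw = 600    # invented example load, not a CERN figure

air_racks = math.ceil(heat_load_kw / AIR_COOLED_KW)
water_racks = math.ceil(heat_load_kw / WATER_COOLED_KW)

print(f"{heat_load_kw} kW of equipment needs ~{air_racks} air-cooled racks,")
print(f"or ~{water_racks} water-cooled racks.")
# 600 kW -> ~150 air-cooled racks vs ~60 water-cooled racks
```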



Check the following interesting articles and presentations:

Article by the Google Vice President of Operations in the CERN Courier, about power consumption in computer centers.

Article giving an overview of CERN's approach to energy efficient computing.

Presentation explaining the cooling approach of the computer center.

Emergency power

At the back right of the main room is the 'critical area', which is backed by diesel generators. Everything else is on UPS (uninterruptible power supply). A UPS is an electrical apparatus that provides emergency power when the main input power source fails, but only for a few minutes, which is enough to switch between the French and the Swiss power supplies in case of problems. Some (but not all) of the chillers are backed by UPS and diesel as well.

Networking