SERVERware 3 Deployment Guide for HP ProLiant DL360 G7

From Bicom Systems Wiki

  • Introduction
  • Hardware Requirements
  • Deployment Guide
  • Installation wizard steps
  • Setup wizard steps
  • Storage Expansion and Replacement


Introduction


The following guide describes the minimum and recommended hardware requirements, as well as the procedures for successful deployment and maintenance of the HP ProLiant DL360 G7 in a SERVERware 3 environment.

Hardware Requirements

Server requirements

HP ProLiant DL360 G7 with the following:

HARDWARE   | MINIMUM SYSTEM REQUIREMENTS                                                              | RECOMMENDED SYSTEM REQUIREMENTS
CPU        | 2.4 GHz Quad-Core Intel Xeon E5620 (12MB cache)                                          | 2.93 GHz Six-Core Intel Xeon X5670 (12MB cache)
RAM        | 16GB (4x4GB) PC3-10600R                                                                  | 32GB (4x8GB) PC3-10600R
Ethernet   | 2 network interfaces (Mirror edition), 3 network interfaces (Cluster edition)            | 4 network interfaces (Mirror edition), 6 network interfaces (Cluster edition)
Controller | Smart Array P410i, 512MB BBWC, RAID 0-60                                                 | Smart Array P410i, 512MB BBWC, RAID 0-60
Disk       | 2 x 100GB 15K RPM SAS 2.5" HP HDD (system), 2 x 300GB 10K RPM SAS 2.5" HP HDD (storage)  | 2 x 100GB 15K RPM SAS 2.5" HP HDD (system), 4 x 600GB 10K RPM SAS 2.5" HP HDD (storage)
PSU        | Redundant HP 460 Watt PSU                                                                | Redundant HP 750 Watt PSU


IMPORTANT NOTE: Software RAID implementations (including onboard/motherboard RAID) are not supported and can cause problems that Bicom Systems will not be able to support.

KVMoIP (Keyboard, Video, Mouse over IP) remote access to each server

  • Remote power management support (remote reset/power off/on)
  • Remote access to BIOS
  • Remote SSH console access
  • A public IP assigned to KVMoIP
  • If KVMoIP is behind firewall/NAT, the following ports must be opened: TCP (80, 443, 5100, 5900-5999)

SERVERware installation media is required in one of the following forms:

  • DVD image burned onto a DVD and inserted into the DVD drive
  • USB image written onto a 2GB USB drive and inserted into an operational USB port

Deployment Guide

Network setup


How-to instructions regarding network setup are available on the following page:

RAID Controller setup

- Press F8 during POST to enter the RAID setup configuration utility
- Create a logical drive (RAID 1) from the two 100GB drives (for the system)
- Create a logical drive (RAID 0) for each remaining drive (for storage)
- Exit the RAID setup configuration utility

Creation of USB installation media


USB images and instructions are available on the following how-to page:



Installation wizard steps

Boot the target machine from the USB installation media. The following welcome screen appears.

If the live system was able to pick up an IP address from a DHCP server, it will be shown on this screen. You can then access the system remotely via SSH on port 2020 with the username 'root' and password 'bicomsystems' and continue the installation (see the example after the list below). The welcome screen offers several options:

  • Exit - Choose this option to exit the installation wizard. This option will open the live system's command line shell.
  • Verify Media - This option will go through the installation media files, match them against previously stored checksums, and check for corruption.
  • Next - Proceed to the next step.
  • Network - Configure an IP address for remote access to the installation wizard.
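For example, from another machine on the same network (the IP address below is a placeholder):

ssh -p 2020 root@192.0.2.10
# log in as 'root' with the password 'bicomsystems'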

Step 1:

Select the type of installation you want to perform: Storage/Controller or Host (Processing or Backup).

Storage/Controller is the network storage for VPSs on the SERVERware network. In order to use a mirrored setup, you will have to install two physical machines as Storage/Controller. A Processing Host is a computation resource that attaches and executes VPSs from storage via the SAN (Storage Area Network).

Step 2:

The installation wizard will proceed to check for available disks.

Step 3:

Select the physical disk for the system installation. The storage volume will be created automatically from the disks not selected in this step.

Step 4:

A confirmation dialog appears.

Step 5:

The installation wizard will now proceed with the installation of the SERVERware operating system.

Step 6:

After the OS is installed, the network configuration dialog appears.

Select Create virtual bonding network interface to proceed to network interface creation.

From the list of available network interfaces, select two interfaces for bonding interface creation.

From the list of available modes, select the aggregation mode that suits your network configuration (e.g. 802.3ad).

Enter a name for a new bonding interface.

Click Next to continue, then choose one of the options to configure the new bonding interface.

After finishing the network configuration, click Next to finish the installation. The wizard will initiate a reboot.

Step 7:


Redo the installation steps for the second (mirrored) machine.


Setup wizard steps

Open your browser and enter the bondLAN IP address you configured during installation. After accepting the self-signed certificate, the SERVERware setup wizard login screen appears. Enter the administration password, which is 'serverware' by default.

Step 1:

After successful login, SERVERware EULA appears.

Acceptance of the EULA leads to the next step.

Step 2:

Enter your license number and the administrator's email address, and set a new administrator password for the SERVERware GUI. This password will also apply to the shell root account. Select your time zone and click Next to continue.

Step 3:

Depending on the license you acquired, this step will offer you the option to configure the LAN, SAN and mirror (RAN) networks. LAN is the local network for SERVERware management and service provision. SAN is a network dedicated to connecting the SERVERware storage with the processing hosts. RAN is a network dedicated to real-time mirroring of the two servers.

Before proceeding with the network configuration, use the command line utility netsetup to create the virtual bonding network interfaces bondRAN and bondSAN. Do the same on the second (mirrored) machine before proceeding to the next step.

The setup wizard will suggest a default configuration for the network interfaces. The machines must be on the same LAN network, and likewise on the same SAN and RAN networks. Modify the network configuration if needed and click Next to proceed to the next step.

Step 4:

Choose a name for the cluster if you don't like the one generated by SERVERware (it must be a valid hostname).

Select from the list, or enter, the LAN IP address of the second (mirrored) machine. The purpose of the mirrored setup is to provide storage redundancy, so it needs a few more configuration parameters. The LAN Virtual IP is a floating IP address for accessing the mirrored storage server. The SAN Virtual IP is a floating IP address used for access to the storage. The Administration UI IP address will be used for the CONTROLLER VPS (GUI).

The CONTROLLER VPS is set up automatically on the storage server; its purpose is to provide the administrative web console and to control and monitor SERVERware hosts and VPSs.

Once you click the Finish button, the wizard will initialise and set up the network storage. When complete, the setup presents a summary.

Wait a few seconds for the CONTROLLER to start, then click the Controller Web Console link to start using SERVERware and creating VPSs.

Storage Expansion and Replacement


Replacing one damaged HDD from RAID 1 on secondary server HP ProLiant DL360 G7 while system is running


When one of the HDDs in the mirror is damaged, the following procedure should be used:

Setup: we have 2 mirrored storage servers; each server has 2 HDDs in RAID 1 (mirror) for storage.

For this procedure, we are first going to check the status of the Smart Array.

Connect through SSH to the storage server with the faulty HDD and check the Smart Array status using the HP utility hpacucli.
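A minimal sketch; this lists all controllers together with the status of their logical and physical drives:

hpacucli ctrl all show config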



As we can see from the output, one of the HDDs has a 'Failed' status.

Our next step is to remove the faulty HDD from bay 6 and insert a new HDD into bay 6.

After the replacement, we can use the same hpacucli command to check the status of the newly installed HDD.


Now we can see that the HP RAID controller is rebuilding data onto the replaced HDD, and the status of logicaldrive 3 is (931.5 GB, RAID 1, Recovering, 0% complete).

This can take a while to complete, depending on the storage size.

After completion, the HDD should have the status 'OK'.

This is the end of our replacement procedure.

Replacing one damaged HDD from RAID 0 on secondary server HP ProLiant DL360 G7 while system is running


When one of the HDDs in the mirror is damaged, the following procedure should be used:

Setup: we have 2 mirrored storage servers; each server has 2 HDDs, each of them set up as RAID 0.

For this procedure, we are first going to check the status of the Smart Array.

Connect through SSH to the storage server with the faulty HDD and check the Smart Array status using the HP utility hpacucli.
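As above, a minimal sketch:

hpacucli ctrl all show config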


This is the output of the hpacucli command when everything is OK.

If one of the drives fails, the output would be:


If an HDD fails, the zpool on the primary server will be in the DEGRADED state.

Next, we should physically replace the damaged HDD in server bay 5 and run hpacucli again.


From the output we can see that the status of the physicaldrive is OK, but the status of logicaldrive 3 is Failed.

We need to delete the failed logical drive from the Smart Array (in this case logicaldrive 3) and recreate it with the new drive. We can do this using hpacucli.
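A minimal sketch, assuming the controller sits in slot 0 (check the slot number in the output above); the 'forced' keyword skips the confirmation prompt:

hpacucli ctrl slot=0 ld 3 delete forced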

Checking the status again.

The new physicaldrive is now unassigned. Create logicaldrive 3 in RAID 0 using the new drive in bay 5, with hpacucli.
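A sketch, assuming slot 0 and the drive address 2I:1:5 explained below:

hpacucli ctrl slot=0 create type=ld drives=2I:1:5 raid=0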

Command explanation:

We have created logicaldrive 3 in RAID 0 configuration, using disk 2I:1:5 in bay 5.
Checking the status again.

To find more detailed info on logicaldrive 3, including which block device name the system has assigned to it, use hpacucli again.
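A sketch, again assuming slot 0:

hpacucli ctrl slot=0 ld 3 show detail
# the 'Disk Name:' line gives the assigned block device, e.g. /dev/sdd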

Once we have the block device name (Disk Name: /dev/sdd), we need to make a partition table on the newly installed disk.

To make a partition table, use parted.
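A sketch, using the disk name /dev/sdd found above:

# create a new GPT partition table on the replaced disk
parted -s /dev/sdd mklabel gpt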

Create the partition with a name matching the faulty partition on the primary server. We have this name from the zpool output above:

SW3-NETSTOR-SRV2-1 FAULTED 3 0 0 too many errors

Our command in this case will be:
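(a sketch: create one partition spanning the whole disk, named after the faulted one)

parted -s /dev/sdd mkpart SW3-NETSTOR-SRV2-1 0% 100%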

We have now added a new partition and a label. Next we need to edit the mirror configuration file: /etc/tgt/mirror/SW3-NETSTOR-SRV2.conf


IMPORTANT: Before we can edit the configuration file, we need to log out the iSCSI session on the primary server.

Connect through SSH to the primary server and use the iscsiadm command to log out.

Example:
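(a sketch; the target name and the portal address are assumptions, following the naming and the address used elsewhere in this guide)

iscsiadm -m node -T SW3-NETSTOR-SRV2-1 -p 192.168.1.46:3259 --logout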

Now we can proceed with editing the configuration file on the secondary server.



The source of the file looks like this:

We need to replace the iSCSI ID to match the ID of the replaced HDD.

To see the new ID, use this command:
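(for example; the new logical drive will show up with a fresh scsi-… identifier)

ls -l /dev/disk/by-id/ | grep scsi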

Now edit the configuration file: replace the ID with the new one and save the file.


Next, we need to update the target from the configuration file we have just edited.

To update the tgt target from the configuration file, we will use tgt-admin.
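A sketch; the 'ALL' form reloads every target, while the original may have updated only this one:

tgt-admin --update ALL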

This ends our procedure on the secondary server.

Next, on the primary server, add the newly created virtual disk to the ZFS pool.


First we need to log in to the iSCSI session we exported before.

Example:
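(a sketch, mirroring the logout example above; the target name and portal are assumptions)

iscsiadm -m node -T SW3-NETSTOR-SRV2-1 -p 192.168.1.46:3259 --login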

After this, we can check the zpool status:
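For example:

zpool status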


From the output we can see:

SW3-NETSTOR-SRV2-1 FAULTED - the status of the secondary HDD


Now we need to change the guid of the old HDD to the guid of the new HDD, so that the zpool can identify the new HDD.

To change the guid from old to new in the zpool, we first need to find out the new guid.

We can use the zdb command to find it:
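(a sketch: dump the cached pool configuration and inspect the entry for this vdev)

zdb | grep -A 3 SW3-NETSTOR-SRV2-1
# note the 'guid:' line for this vdev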

The important line in the zdb output is the one containing the guid.

This guid needs to be updated in the zpool.

We can update the guid with the following command:


Example:
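(a sketch, assuming the pool is named NETSTOR, a placeholder, and <guid> is the value found with zdb; zpool replace swaps the vdev identified by that guid for the new partition)

zpool replace NETSTOR <guid> /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-1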


Now check the zpool status:

You need to wait for the zpool to finish resilvering.

This ends our replacement procedure.


Expanding storage with 2 new HDDs in RAID 1 on HP ProLiant DL360 G7 while the system is running



Insert 2 new HDDs into the storage server's empty bays.

Connect through SSH to the server and, using hpacucli, create a new logical drive and add the new HDDs to it.

Use the following command to view the configuration:
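(as in the earlier sketch)

hpacucli ctrl all show config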

We can see from the output that the new HDDs appear under unassigned.

We need to create a logical drive in RAID 1 from the unassigned HDDs.

To create the logical drive, use hpacucli:
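(a sketch, assuming slot 0; the drive addresses are placeholders, take them from the unassigned section of the output above)

hpacucli ctrl slot=0 create type=ld drives=1I:1:7,1I:1:8 raid=1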

And again check the status:


Now we have a new logical drive: logicaldrive 4 (931.5 GB, RAID 1, OK).

After creating the new logical drive, we need to make a partition table on the new drive.

We need to find out which block device name the system has assigned to logicaldrive 4.

To find out, type:
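(a sketch, assuming slot 0)

hpacucli ctrl slot=0 ld 4 show detail
# the 'Disk Name:' line gives the block device, e.g. /dev/sde (a placeholder)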


Use parted to make a partition table for the new logical drive.
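(a sketch, assuming the new logical drive appeared as /dev/sde; the device name is a placeholder)

parted -s /dev/sde mklabel gpt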

And create a new label.

IMPORTANT: the label must be formatted as before: SW3-NETSTOR-SRV2-2
The name of the label 'SRV2' comes from the server number and '-2' is the second virtual drive:

1. SW3-NETSTOR-SRV2 - this means the virtual disk is on SERVER 2
2. -2 - this is the number of the virtual disk (virtual disk 2)

Now add the label to the new drive.
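(a sketch, again assuming /dev/sde)

parted -s /dev/sde mkpart SW3-NETSTOR-SRV2-2 0% 100%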

We have to update the configuration file so that SERVERware knows which block device to use.

We can get this information by listing devices by ID:
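For example:

ls -l /dev/disk/by-id/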

Now copy the disk ID scsi-3600508b1001cad552daf39b1039ea46a and edit the configuration file:

In our case:

The file should look like this:


Add one more <target> block to the configuration file, to include the new target:

<target SW3-NETSTOR-SRV2-2>

and ID:

<direct-store /dev/disk/by-id/scsi-3600508b1001cad552daf39b1039ea46a>

After editing, the configuration file should look like this:
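(a sketch of the combined file, following the fragments above; the first target's disk ID is not shown in this guide, so <existing-disk-id> is a placeholder)

<target SW3-NETSTOR-SRV2-1>
<direct-store /dev/disk/by-id/scsi-<existing-disk-id>>
</target>
<target SW3-NETSTOR-SRV2-2>
<direct-store /dev/disk/by-id/scsi-3600508b1001cad552daf39b1039ea46a>
</target>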

Save file and exit.

We need to edit one more configuration file to add the location of the secondary HDD:

Add the new storage name, comma-separated, after the existing storage name.

Edit the file and apply the change so that it looks like this:

Save file and exit.


Now repeat all of these steps on the other server.


After all these steps are done on both servers, we need to link storage from the secondary server to the zpool on the primary server.

Connect through SSH to the secondary server and export the target to iSCSI using the tgt-admin tool.
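A sketch; tgt-admin -e (--execute) reads the target configuration and creates any targets that are not yet live:

tgt-admin -e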


This ends our procedure on the secondary server.

Connect through SSH to the primary server and use iscsiadm discovery to find the new logical disk we exported on the secondary server.

First, find out the network address of the secondary storage server:
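(one way to do this; the original command is not shown, so this is an assumption)

# shows the address and port tgtd is listening on
ss -tlnp | grep tgtd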

In the output we will get the information we need for the discovery command: 192.168.1.46:3259 (IP address and port).

Now, using iscsiadm discovery, find the new logical drive:
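(using the address and port found above)

iscsiadm -m discovery -t sendtargets -p 192.168.1.46:3259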

Now log in to the exported iSCSI session. Use this command:
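(a sketch; the target name follows the naming used in the configuration file above)

iscsiadm -m node -T SW3-NETSTOR-SRV2-2 -p 192.168.1.46:3259 --login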

To see the newly added logical drive, use:
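For example:

lsblk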

Now we need to expand our pool with the new logical drives. Be careful with this command: check the names of the logical drives to make sure you have the right ones.
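(a sketch, assuming the pool is named NETSTOR, a placeholder, and mirroring the new local partition with its iSCSI counterpart; both partition labels are assumptions following the naming scheme above)

zpool add NETSTOR mirror /dev/disk/by-partlabel/SW3-NETSTOR-SRV1-2 /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-2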

Now, in the zpool, we should see the newly added logical volume:

This is the end of our storage expansion procedure.

Retrieved from 'http://wiki.bicomsystems.com/index.php?title=SERVERware_3_Deployment_Guide_for_HP_ProLiant_DL360_G7&oldid=38170'