Tuesday, October 24, 2017

Adding space to a filesystem by adding a disk with LVM

We have a server on which we need to grow an application filesystem by 50 GB.

This is what the filesystem looks like before applying any changes.

[root@linux3 ~]# df -h | grep opt
/dev/mapper/VG_optaplicacion-lv_optaplicacion
                      118G  100G   13G  89% /opt/aplicacion

The task breaks down into four main parts:
#1   Physically add the disk/LUN.
#2   Detect/partition the disk in the operating system.
#3   Add the device to LVM.
#4   Grow the filesystem.


#1   Physically add the disk/LUN.
This machine is a VMware virtual machine, which we can confirm like this:

[root@linux3 ~]#  dmidecode -t system | less
# dmidecode 2.12
SMBIOS 2.4 present.

Handle 0x0001, DMI type 1, 27 bytes

System Information
        Manufacturer: VMware, Inc.
        Product Name: VMware Virtual Platform

So the disk is added from vCenter, as shown in the following images (edited so as not to show sensitive information):

Edit Settings:
 Add:
 Hard Disk:
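
For reference, the same disk can usually be added from the command line as well, for example with VMware's govc CLI. This is only a hedged sketch (the GUI was used here); the VM name, disk name and size are placeholders:

# Assumes GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD already point at the vCenter
govc vm.disk.create -vm linux3 -name linux3/extra_disk -size 50G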






#2   Detect/partition the disk in the operating system.

The disk is already physically assigned to the machine; we look for it with:
[root@linux3 ~]# fdisk -l
...
...
Disk /dev/sdg: 53.7 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
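
If the new disk doesn't show up in fdisk -l right away, rescanning the SCSI hosts usually makes it appear (the same loop is used in the SAN post further down); a hedged sketch:

# Ask every SCSI host adapter to rescan its bus for new devices
for s in $(ls /sys/class/scsi_host); do echo "- - -" > /sys/class/scsi_host/$s/scan; done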

As the fdisk output shows, the disk has no partition yet, so we create one with fdisk:

[root@linux3 ~]#  fdisk /dev/sdg
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x5883c851.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-51200, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-51200, default 51200): 
Using default value 51200

Command (m for help): t

Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.
[root@linux3 ~]# 



#3   Add the device to LVM.

I need to add the newly created partition to the LVM volume group I'm using. The df -h at the top showed that the VG in question is VG_optaplicacion.

[root@linux3 ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  VG_logaplicacion   1   1   0 wz--n- 249.99g    0 
  VG_optaplicacion   3   1   0 wz--n- 119.99g    0 
  VolGroup00      2   7   0 wz--n- 149.56g 9.56g

I extend it:

[root@linux3 ~]# vgextend VG_optaplicacion /dev/sdg1  
  Physical volume "/dev/sdg1" successfully created
  Volume group "VG_optaplicacion" successfully extended
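
Note that vgextend initialized /dev/sdg1 as a physical volume on the fly (the "successfully created" line). On setups where that doesn't happen implicitly, the explicit sequence would be this sketch:

# Initialize the partition as an LVM physical volume, then add it to the volume group
pvcreate /dev/sdg1
vgextend VG_optaplicacion /dev/sdg1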

Now I grow the LV by the size of the entire disk I just added to it.

[root@linux3 ~]# lvextend /dev/VG_optaplicacion/lv_optaplicacion /dev/sdg1
  Size of logical volume VG_optaplicacion/lv_optaplicacion changed from 119.99 GiB (30717 extents) to 169.98 GiB (43516 extents).
  Logical volume lv_optaplicacion successfully resized.
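
Depending on the lvm2 version, lvextend may require an explicit size instead of just a PV. A hedged alternative that grabs all the free space in the VG and, with -r, grows the filesystem in the same step:

# Extend the LV by 100% of the free extents and resize the filesystem at the same time
lvextend -r -l +100%FREE /dev/VG_optaplicacion/lv_optaplicacion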


#4   Grow the filesystem.

Now all that's left is to grow the filesystem itself, that is, make the new space show up in df and become usable.

[root@linux3 ~]# resize2fs /dev/VG_optaplicacion/lv_optaplicacion 
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/VG_optaplicacion/lv_optaplicacion is mounted on /opt/aplicacion; on-line resizing required
old desc_blocks = 8, new_desc_blocks = 11
Performing an on-line resize of /dev/VG_optaplicacion/lv_optaplicacion to 44560384 (4k) blocks.
The filesystem on /dev/VG_optaplicacion/lv_optaplicacion is now 44560384 blocks long.


I check that everything looks good, and that's that.

[root@linux3 ~]# df -h | grep opt
/dev/mapper/VG_optaplicacion-lv_optaplicacion
                      168G  100G   60G  63% /opt/aplicacion

Wednesday, August 23, 2017

Adding the optional and extras repositories with Red Hat Satellite.

This is a simple post on adding extra repositories from our already configured Satellite to a server.

From our server p-01 we check which repositories are currently configured.

[root@p-01 ~]# yum repolist 
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
repo id                                                    repo name                                                          status
!rhel-7-server-optional-rpms/7Server/x86_64                Red Hat Enterprise Linux 7 Server - Optional (RPMs)                12,683
!rhel-7-server-rpms/7Server/x86_64                         Red Hat Enterprise Linux 7 Server (RPMs)                           16,956
repolist: 29,639

In this case, rhel-7-server-rpms and rhel-7-server-optional-rpms are enabled.

I check which repositories are available to enable.
It shows the name, the URL, and whether or not each one is enabled.

[root@p-01 ~]# subscription-manager repos --list


+----------------------------------------------------------+
    Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID:   rhel-7-server-satellite-tools-6.2-rpms
Repo Name: Red Hat Satellite Tools 6.2 (for RHEL 7 Server) (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/7Server/$basearch/sat-tools/6.2/os
Enabled:   0

Repo ID:   rhel-server-rhscl-7-rpms
Repo Name: Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/rhscl/1/os
Enabled:   0

Repo ID:   rhel-7-server-optional-rpms
Repo Name: Red Hat Enterprise Linux 7 Server - Optional (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/optional/os
Enabled:   1

Repo ID:   rhel-7-server-openstack-10-tools-rpms
Repo Name: Red Hat OpenStack Platform 10 Tools for RHEL 7 Server (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/openstack-to
           ols/10/os
Enabled:   0

Repo ID:   rhel-7-server-rh-common-rpms
Repo Name: Red Hat Enterprise Linux 7 Server - RH Common (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/rh-common/os
Enabled:   0

Repo ID:   rhel-7-server-rhn-tools-rpms
Repo Name: RHN Tools for Red Hat Enterprise Linux 7 Server (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/rhn-tools/os
Enabled:   0

Repo ID:   rhel-7-server-v2vwin-1-rpms
Repo Name: Red Hat Virt V2V Tool for RHEL 7 (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/v2vwin/os
Enabled:   0

Repo ID:   rhel-7-server-rpms
Repo Name: Red Hat Enterprise Linux 7 Server (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/os
Enabled:   1

Repo ID:   rhel-7-server-supplementary-rpms
Repo Name: Red Hat Enterprise Linux 7 Server - Supplementary (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/$releasever/$basearch/supplementar
           y/os
Enabled:   0

Repo ID:   rhel-7-server-extras-rpms
Repo Name: Red Hat Enterprise Linux 7 Server - Extras (RPMs)
Repo URL:  https://sat.xxx/pulp/repos/xxx/Library/content/dist/rhel/server/7/7Server/$basearch/extras/os
Enabled:   0

I enable the one that's missing and check again:

[root@p-01 ~]# subscription-manager repos --enable rhel-7-server-extras-rpms
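
Several repositories can also be toggled in a single call, which is handy when a machine needs both optional and extras. A sketch using the repo IDs from the listing above:

# Enable both repos at once and then list only what is enabled
subscription-manager repos --enable rhel-7-server-optional-rpms --enable rhel-7-server-extras-rpms
yum repolist enabled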



[root@p-01 ~]# yum repolist
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
rhel-7-server-extras-rpms                                                                                    | 2.0 kB  00:00:00     
rhel-7-server-optional-rpms                                                                                  | 2.0 kB  00:00:00     
rhel-7-server-rpms                                                                                           | 2.0 kB  00:00:00     
(1/9): rhel-7-server-extras-rpms/x86_64/group                                                                |  130 B  00:00:00     
(2/9): rhel-7-server-extras-rpms/x86_64/updateinfo                                                           | 182 kB  00:00:00     
(3/9): rhel-7-server-extras-rpms/x86_64/primary                                                              | 154 kB  00:00:00     
(4/9): rhel-7-server-optional-rpms/7Server/x86_64/group                                                      |  19 kB  00:00:00     
(5/9): rhel-7-server-optional-rpms/7Server/x86_64/updateinfo                                                 | 1.6 MB  00:00:00     
(6/9): rhel-7-server-optional-rpms/7Server/x86_64/primary                                                    | 3.6 MB  00:00:00     
(7/9): rhel-7-server-rpms/7Server/x86_64/group                                                               | 595 kB  00:00:00     
(8/9): rhel-7-server-rpms/7Server/x86_64/updateinfo                                                          | 2.2 MB  00:00:00     
(9/9): rhel-7-server-rpms/7Server/x86_64/primary                                                             |  23 MB  00:00:00     
rhel-7-server-extras-rpms                                                                                                   603/603
rhel-7-server-optional-rpms                                                                                             12683/12683
rhel-7-server-rpms                                                                                                      16956/16956
repo id                                                    repo name                                                          status
!rhel-7-server-extras-rpms/x86_64                          Red Hat Enterprise Linux 7 Server - Extras (RPMs)                     603
!rhel-7-server-optional-rpms/7Server/x86_64                Red Hat Enterprise Linux 7 Server - Optional (RPMs)                12,683
!rhel-7-server-rpms/7Server/x86_64                         Red Hat Enterprise Linux 7 Server (RPMs)                           16,956
repolist: 30,242

Friday, August 11, 2017

Connecting over the serial console, via ssh, to a Red Hat Linux installed on a Dell PowerEdge R910:

The machine is up and running fine, and we want to configure the console so we can ssh into the management controller and from there connect to the Linux over serial.

On the Linux side we modified /etc/inittab, adding the following line (after trying several speeds):
co:2345:respawn:/sbin/agetty -h -L 115200 ttyS1

We added a ttyS1 line at the end of /etc/securetty.

We sent init a kill -1 (SIGHUP) so it re-reads /etc/inittab.
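
A minimal sketch of those three steps on a SysV-init Red Hat:

# Spawn a getty on the second serial port (ttyS1) at 115200 baud
echo 'co:2345:respawn:/sbin/agetty -h -L 115200 ttyS1' >> /etc/inittab
# Allow root logins on that serial line
echo 'ttyS1' >> /etc/securetty
# Make init (PID 1) re-read /etc/inittab
kill -1 1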


Now we connect to the iDRAC (Dell's equivalent of an ILOM) over ssh:
ssh root@10.XX.XX.XX
password: calvin  (the default)

/admin1-> console com2

Connected to Serial Device 2. To end type: ^\
plnxxx.redhat4ever.com.ar login:




If we also want to see the console from GRUB, i.e., watch the machine boot, we need to configure a few lines in /etc/grub.conf as well.

We add:
serial --unit=1 --speed=115200
terminal --timeout=10 serial
We comment out:
#splashimage=(hd0,2)/grub/splash.xpm.gz
#hiddenmenu

And we modify the kernel line:
 kernel .......... console=ttyS1,115200n8r console=tty1
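
Putting it together, the relevant part of /etc/grub.conf could end up looking roughly like this (the kernel version, root device and remaining kernel arguments are placeholders):

serial --unit=1 --speed=115200
terminal --timeout=10 serial
#splashimage=(hd0,2)/grub/splash.xpm.gz
#hiddenmenu
title Red Hat Enterprise Linux
        root (hd0,2)
        kernel /vmlinuz-<version> ro root=<root-device> console=ttyS1,115200n8r console=tty1
        initrd /initrd-<version>.img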

Tuesday, August 1, 2017

Assigning LUNs from an IBM-2145

This article is intended as a step-by-step guide for assigning LUNs from an IBM-2145 to a Red Hat Enterprise Linux Server release 6.7.
As always, the article is based on the recipes, instructions and advice of my colleagues Adrian, Gonzalo, Norman and Javier.

We start from a freshly installed Red Hat 6 server, to which nine 1 TB LUNs have been assigned.

The step-by-step guide has four general parts:

# Multipath configuration.
# Disk alignment. Partitioning.
# Creating volume groups and logical volumes with LVM.
# Filesystem creation and mounting.

-------------------------------------------------------------------------------------------------------------------------
# Multipath configuration.


We create the configuration file and rescan:

[root@plweb1 ~]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf

[root@plweb1 ~]# /etc/init.d/multipathd restart

[root@plweb1 ~]# multipath -F ; multipath -v3 ; multipath -ll

[root@plweb1 ~]# mpathconf --enable --with_multipathd y 
I save the multipath output to a file so I keep a copy of the old configuration.

[root@plweb1 ~]# multipath -ll > multipath.ll

We take note of the disks:

[root@plweb1 ~]# multipath -l | grep dm-
mpathb (3600507680c8081eee000000000000006) dm-6 IBM,2145
mpathc (3600507680c8081eee000000000000008) dm-7 IBM,2145
mpathd (3600507680c8081eee00000000000000a) dm-8 IBM,2145
mpathe (3600507680c8081eee00000000000000c) dm-9 IBM,2145
mpathf (3600507680c8081eee00000000000000e) dm-10 IBM,2145
mpathg (3600507680c8081eee000000000000007) dm-11 IBM,2145
mpathh (3600507680c8081eee000000000000009) dm-12 IBM,2145
mpathi (3600507680c8081eee00000000000000b) dm-13 IBM,2145
mpathj (3600507680c8081eee00000000000000d) dm-14 IBM,2145

I edit the file so it ends up like this. Almost the entire file changes; the most setup-specific addition is the multipaths alias block at the end.

[root@plweb1 ~]#  cat /etc/multipath.conf
###############################################################################
# Multipath.conf file for IBM Storage
#
# Version 3.01
#
# IMPORTANT: If you change multipath.conf settings after the DM MPIO devices
# have already been configured, be sure to rerun "multipath".
###############################################################################
#
#
# defaults:
#
#     polling_interval   : The interval between two path checks in seconds.
#
#     failback           : The failback policy should be set to "immediate"
#                          to have automatic failback, i.e. if a higher
#                          priority path that previously failed is restored,
#                          I/O automatically and immediately fails back to
#                          the preferred path.
#
#    no_path_retry       : Use this setting in order to deal with transient
#                          total path failure scenarios. Indicates that the if
#                          all paths are failed for 10 checks (iterations of
#                          the checkerloop) then will set the device to
#                          .fail_if_no_path. so that I/O will not stay queued
#                          forever and I/O errors are returned back to the
#                          application. This value should be adjusted based on
#                          the value of the polling_interval. Basically, with a
#                          larger polling_interval, this means that the amount
#                          of time of allowed total path failure will be
#                          longer, since the tolerance time is
#                          (no_path_retry * polling_interval) seconds.
#                          SHOULD NOT BE USED WITH .features..
#
#    rr_min_io           : The number of IOs to route to a path before switching
#                          to the next path in the same path group
#
#    path_checker        : The default 'readsector0' path checker uses SCSI
#                          READ (opcode 0x28) which doesn't work in clustered
#                          environments. TUR (Test Unit Ready) does work in
#                          clustered environments with storage that subscribes
#                          to the SCSI-3 spec.
#
#    user_friendly_names : With this value set to .yes., DM MPIO devices will
#                          be named as .mpath0., .mpath1., .mpath2., etc. ...
#                          The /var/lib/mulitpath/bindings file is
#                          automatically generated, mapping the .mpathX. name
#                          to the wwid of the LUN. If set to "no", use the
#                          WWID as the alias. In either case this be will be
#                          overriden by any specific aliases in this file.
#
#
defaults {
    polling_interval    30
    failback            immediate
    no_path_retry       5
    rr_min_io           100
    path_checker        tur
    user_friendly_names yes
}

#

# An example of blacklisting a local SCSI disk.
# Here a local disk with wwid SIBM-ESXSMAN3184MC_FUFR9P29044K2 is
# blacklisted and will not appear when "multipath -l(l)" is invoked.
#
#
#blacklist {
#    wwid SIBM-ESXSMAN3184MC_FUFR9P29044K2
#}

#

# An example of using an alias.
# NOTE: this will override the "user_friendly_name" for this LUN.
#
# Here a LUN from IBM storage with wwid 3600507630efffe32000000000000120a
# is given an alias of "IBM-1750" and will appear as "IBM-1750
#(3600507630efffe32000000000000120a)", when "multipath -l(l)" is invoked.
#
#
#multipaths {
#    multipath {
#        wwid 3600507630efffe32000000000000120a
#        alias IBM-1750
#    }
#}

#

#  devices    : List of per storage controler settings, overrides default
#              settings (device_maps block), overriden by per multipath
#              settings (multipaths block)
#
#  vendor     : Vendor Name
#
#  product    : Product Name
#
#  path_grouping_policy : Path grouping policy to apply to multipath hosted
#                         by this storage controller
#
#  prio_callout : The program and args to callout to obtain a path
#              weight. Weights are summed for each path group to
#              determine the next PG to use case of failure.
#              NOTE: If no callout then all paths have equals weight.
#
#
devices {
# These are the default settings for 2145 (IBM SAN Volume Controller)
# Starting with RHEL5, multipath includes these settings be default
    device {
        vendor                   "IBM"
        product                  "2145"
        prio                     "alua"
        path_grouping_policy     group_by_prio
        #prio_callout             "/sbin/mpath_prio_alua /dev/%n"
    }

# These are the default settings for 1750 (IBM DS6000)

# Starting with RHEL5, multipath includes these settings be default
    device {
        vendor                   "IBM"
        product                  "1750500"
        prio                     "alua"
        path_grouping_policy     group_by_prio
        #prio_callout             "/sbin/mpath_prio_alua /dev/%n"
    }

# These are the default settings for 2107 (IBM DS8000)

# Uncomment them if needed on this system
    device {
        vendor                   "IBM"
        product                  "2107900"
        path_grouping_policy     group_by_serial
    }

# These are the default settings for 2105 (IBM ESS Model 800)

# Starting with RHEL5, multipath includes these settings be default
    device {
        vendor                   "IBM"
        product                  "2105800"
        path_grouping_policy     group_by_serial
    }
}
multipaths {
        multipath {
               wwid                    3600507680c8081eee000000000000006
               alias                   mp_01
        }
        multipath {
               wwid                    3600507680c8081eee000000000000008
               alias                   mp_02
        }
        multipath {
               wwid                    3600507680c8081eee00000000000000a
               alias                   mp_03
        }
        multipath {
               wwid                    3600507680c8081eee00000000000000c
               alias                   mp_04
        }
        multipath {
               wwid                    3600507680c8081eee00000000000000e
               alias                   mp_05
        }
        multipath {
               wwid                    3600507680c8081eee000000000000007
               alias                   mp_06
        }
        multipath {
               wwid                    3600507680c8081eee000000000000009
               alias                   mp_07
        }
        multipath {
               wwid                    3600507680c8081eee00000000000000b
               alias                   mp_08
        }
        multipath {
               wwid                    3600507680c8081eee00000000000000d
               alias                   mp_09
        }

}


# Force a LIP (loop initialization) on every FC host so the fabric announces the new LUNs
for f in $(ls /sys/class/fc_host); do echo "1" > /sys/class/fc_host/$f/issue_lip; done

# Rescan every SCSI host, then flush and rebuild the multipath maps
for s in $(ls /sys/class/scsi_host); do echo "- - -" > /sys/class/scsi_host/$s/scan; done
multipath -F ; multipath -v3 ; multipath -ll

# Disk alignment. Partitioning.
Per EMC's recommendation, the disks should be aligned so that the partition starts at sector 128.

One way to do it with fdisk, to understand the whole process:

[root@plweb1 ~]# fdisk /dev/mapper/mp_09
...
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-133674, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-133674, default 133674): 
Using default value 133674

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): x

Expert command (m for help): b
Partition number (1-4): 1
New beginning of data (1-2147472809, default 63): 128

Expert command (m for help): p

Disk /dev/mapper/mp_09: 255 heads, 63 sectors, 133674 cylinders


Nr AF  Hd Sec  Cyl  Hd Sec  Cyl     Start      Size ID

 1 00   1   1    0 254  63 1023        128 2147472682 8e
 2 00   0   0    0   0   0    0          0          0 00
 3 00   0   0    0   0   0    0          0          0 00
 4 00   0   0    0   0   0    0          0          0 00

Expert command (m for help): w
The partition table has been altered!

With fdisk -lu I check that the starting sector ended up at 128.

[root@plweb1 ~]# fdisk -lu /dev/mapper/mp_09

Disk /dev/mapper/mp_09: 1099.5 GB, 1099511627776 bytes

255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disk identifier: 0x4c118c65

                      Device Boot      Start         End      Blocks   Id  System

/dev/mapper/mp_09p1             128  2147472809  1073736341   8e  Linux LVM

Now I can either create this same partition on each of the disks to assign or, in this case, since I know all the disks (LUNs) have the same geometry and size, copy the partition table from one disk to the others. (I only show one of the 8 sfdisk runs I do; a loop version is sketched right after.)

[root@plweb1 ~]# sfdisk -d /dev/mapper/mp_09 | sfdisk  /dev/mapper/mp_08
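
The remaining copies can be done with a small loop, equivalent to repeating the command above for the other eight maps (a hedged sketch):

# Replicate mp_09's partition table onto the other eight multipath maps
for d in mp_01 mp_02 mp_03 mp_04 mp_05 mp_06 mp_07 mp_08; do
    sfdisk -d /dev/mapper/mp_09 | sfdisk /dev/mapper/$d
done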

I check that it came out OK (I only show one, but I verify all 9 disks).

[root@plweb1 ~]# fdisk -lu /dev/mapper/mp_08

Disk /dev/mapper/mp_08: 1099.5 GB, 1099511627776 bytes

255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disk identifier: 0x00000000

                      Device Boot      Start         End      Blocks   Id  System

/dev/mapper/mp_08p1             128  2147472809  1073736341   8e  Linux LVM

# Creating volume groups and logical volumes with LVM.
Once partitioning is finished, I run partprobe so the partition tables get re-read.

[root@plweb1 ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
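
The warning refers to the local boot disk (/dev/sda), which is in use, not to the new LUNs. If the mp_XXp1 partition maps still don't appear under /dev/mapper, kpartx can create them for each multipath map; a hedged sketch:

# Create device-mapper entries for the partitions of each multipath map
for d in /dev/mapper/mp_0[1-9]; do kpartx -a "$d"; done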

I initialize them with pvcreate:

[root@plweb1 ~]# pvcreate /dev/mapper/mp_01p1 /dev/mapper/mp_02p1 /dev/mapper/mp_03p1 /dev/mapper/mp_04p1 /dev/mapper/mp_05p1 /dev/mapper/mp_06p1 /dev/mapper/mp_07p1 /dev/mapper/mp_08p1 /dev/mapper/mp_09p1
  Physical volume "/dev/mapper/mp_01p1" successfully created
  Physical volume "/dev/mapper/mp_02p1" successfully created
  Physical volume "/dev/mapper/mp_03p1" successfully created
  Physical volume "/dev/mapper/mp_04p1" successfully created
  Physical volume "/dev/mapper/mp_05p1" successfully created
  Physical volume "/dev/mapper/mp_06p1" successfully created
  Physical volume "/dev/mapper/mp_07p1" successfully created
  Physical volume "/dev/mapper/mp_08p1" successfully created
  Physical volume "/dev/mapper/mp_09p1" successfully created

I create the VG:

[root@plweb1 ~]# vgcreate vg_ /dev/mapper/mp_01p1 /dev/mapper/mp_02p1 /dev/mapper/mp_03p1 \
> /dev/mapper/mp_04p1 /dev/mapper/mp_05p1 /dev/mapper/mp_06p1 \
> /dev/mapper/mp_07p1 /dev/mapper/mp_08p1 /dev/mapper/mp_09p1
  Volume group "vg_" successfully created
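
A quick sanity check, not part of the original run, that the VG really spans the nine PVs:

# The #PV column should show 9 and VSize roughly 9 TB
vgs vg_
pvs | grep mp_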

I create the LVs:
[root@plweb1 ~]# lvcreate -n lv_opt_ar -L 7tb vg_
  Logical volume "lv_opt_ar" created.
[root@plweb1 ~]# lvcreate -n lv_opt_ar_archives -L 2tb vg_
  Volume group "vg_" has insufficient free space (524270 extents): 524288 required.

That never works because the extent counts never come out exact, so I just give it 100% of what's left:
[root@plweb1 ~]# lvcreate -n lv_opt_ar_archives -l 100%FREE vg_

# Filesystem creation and mounting.
I create the filesystems and, with tune2fs, change the flags so they aren't automatically fsck'd.

[root@plweb1 ~]# mkfs.ext4 /dev/mapper/vg_-lv_opt_ar
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=8 blocks, Stripe width=0 blocks
469762048 inodes, 1879048192 blocks
93952409 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
57344 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done                          

Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 23 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@plweb1 ~]# tune2fs -c -1 /dev/vg_/lv_opt_ar
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
[root@plweb1 ~]# tune2fs -i -1 /dev/vg_/lv_opt_ar
tune2fs 1.41.12 (17-May-2010)
Setting interval between checks to 18446744073709465216 seconds
[root@plweb1 ~]#

[root@plweb1 ~]# mkfs.ext4 /dev/mapper/vg_-lv_opt_ar_archives

mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=8 blocks, Stripe width=0 blocks
134217728 inodes, 536852480 blocks
26842624 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000

Writing inode tables: done                          

Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@plweb1 ~]# tune2fs -c -1 /dev/vg_/lv_opt_ar_archives  
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
[root@plweb1 ~]# tune2fs -i -1 /dev/vg_/lv_opt_ar_archives
tune2fs 1.41.12 (17-May-2010)
Setting interval between checks to 18446744073709465216 seconds


I mount them and add them to /etc/fstab:
[root@plweb1 ~]# mkdir /opt/ar
[root@plweb1 ~]# mount /dev/vg_/lv_opt_ar /opt/ar
[root@plweb1 ~]# mkdir /opt/ar/archives
[root@plweb1 ~]# mount /dev/vg_/lv_opt_ar_archives /opt/ar/archives

I add the following lines to /etc/fstab:
/dev/mapper/vg_-lv_opt_ar                 /opt/ar   ext4    defaults        1 2
/dev/mapper/vg_-lv_opt_ar_archives        /opt/ar/archives  ext4    defaults        1 2
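
Finally, a quick sanity check that the fstab entries parse and the new filesystems are visible (a hedged extra step, not shown in the original run):

# mount -a should return quietly if the fstab lines are correct
mount -a
df -h /opt/ar /opt/ar/archives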