root@192.168.10.103's password:
Last login: Thu Jun 26 05:17:21 2014 from 192.168.10.1
centos2[root /root]#
centos2[root /root]#
centos2[root /root]# mdadm --detail --scan
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=0.90 UUID=5a088ee8:3b9d3abe:da1c49f8:7244bc72
centos2[root /root]# dumpe2fs /dev/md0
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 7265ca1f-1561-459a-ad0a-cd5aa9831103
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal resize_inode dir_index filetype sparse_super large_file
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 131072
Block count: 262080
Reserved block count: 13104
Free blocks: 253514
Free inodes: 131040
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 63
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Thu Jun 26 05:41:32 2014
Last mount time: Thu Jun 26 05:41:54 2014
Last write time: Thu Jun 26 05:45:43 2014
Mount count: 1
Maximum mount count: 27
Last checked: Thu Jun 26 05:41:32 2014
Check interval: 15552000 (6 months)
Next check after: Tue Dec 23 05:41:32 2014
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 1e1668f9-2b19-4495-9bd2-8576edfb5f0c
Journal backup: inode blocks
Journal size: 16M
Group 0: (Blocks 0-32767)
Primary superblock at 0, Group descriptors at 1-1
Reserved GDT blocks at 2-64
Block bitmap at 65 (+65), Inode bitmap at 66 (+66)
Inode table at 67-578 (+67)
28081 free blocks, 16373 free inodes, 2 directories
Free blocks: 4687-32767
Free inodes: 12-16384
Group 1: (Blocks 32768-65535)
Backup superblock at 32768, Group descriptors at 32769-32769
Reserved GDT blocks at 32770-32832
Block bitmap at 32833 (+65), Inode bitmap at 32834 (+66)
Inode table at 32835-33346 (+67)
32186 free blocks, 16381 free inodes, 3 directories
Free blocks: 33347-36863, 36865-45055, 45057-47103, 47105-65535
Free inodes: 16388-32768
Group 2: (Blocks 65536-98303)
Block bitmap at 65536 (+0), Inode bitmap at 65537 (+1)
Inode table at 65538-66049 (+2)
32251 free blocks, 16381 free inodes, 3 directories
Free blocks: 66050-77823, 77825-79871, 79873-94207, 94209-98303
Free inodes: 32772-49152
Group 3: (Blocks 98304-131071)
Backup superblock at 98304, Group descriptors at 98305-98305
Reserved GDT blocks at 98306-98368
Block bitmap at 98369 (+65), Inode bitmap at 98370 (+66)
Inode table at 98371-98882 (+67)
32186 free blocks, 16381 free inodes, 3 directories
Free blocks: 98883-108543, 108546-112639, 112641-131071
Free inodes: 49156-65536
Group 4: (Blocks 131072-163839)
Block bitmap at 131072 (+0), Inode bitmap at 131073 (+1)
Inode table at 131074-131585 (+2)
32251 free blocks, 16381 free inodes, 3 directories
Free blocks: 131586-141311, 141313-143359, 143362-163839
Free inodes: 65540-81920
Group 5: (Blocks 163840-196607)
Backup superblock at 163840, Group descriptors at 163841-163841
Reserved GDT blocks at 163842-163904
Block bitmap at 163905 (+65), Inode bitmap at 163906 (+66)
Inode table at 163907-164418 (+67)
32186 free blocks, 16381 free inodes, 3 directories
Free blocks: 164419-174079, 174081-176127, 176130-196607
Free inodes: 81924-98304
Group 6: (Blocks 196608-229375)
Block bitmap at 196608 (+0), Inode bitmap at 196609 (+1)
Inode table at 196610-197121 (+2)
32251 free blocks, 16381 free inodes, 3 directories
Free blocks: 197122-206847, 206849-208895, 208898-229375
Free inodes: 98308-114688
Group 7: (Blocks 229376-262079)
Backup superblock at 229376, Group descriptors at 229377-229377
Reserved GDT blocks at 229378-229440
Block bitmap at 229441 (+65), Inode bitmap at 229442 (+66)
Inode table at 229443-229954 (+67)
32122 free blocks, 16381 free inodes, 3 directories
Free blocks: 229955-241663, 241665-243711, 243714-262079
Free inodes: 114692-131072
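The group listing above is consistent with the header fields: 262080 blocks at 32768 blocks per group gives 8 block groups, the last one partial. A quick sketch of the arithmetic, using only numbers from the dumpe2fs output:

```shell
# Ceiling division: number of block groups this filesystem needs
# (figures from the dumpe2fs header: 262080 blocks, 32768 per group).
blocks=262080
per_group=32768
groups=$(( (blocks + per_group - 1) / per_group ))
echo "$groups"   # prints: 8
```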
centos2[root /root]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdd1[1] sdc1[0]
1048320 blocks 64k chunks
unused devices: <none>
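The 1048320-block figure in /proc/mdstat checks out: RAID0 capacity is simply the sum of the members' used sizes (the 0.90 superblock trims each 524272-block partition down to 524160 usable blocks). A sketch of the arithmetic:

```shell
# RAID0: capacity is the sum of per-device used sizes.
dev_size=524160   # usable blocks per member after metadata
members=2
echo $(( members * dev_size ))   # prints: 1048320
```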
centos2[root /root]# cd /dev
centos2[root /dev]# ls sd?
sda sdb sdc sdd sde sdf sdg
centos2[root /dev]# fdisk /dve/sde
Unable to open /dve/sde
centos2[root /dev]# fdisk /dev/sde
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-512, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-512, default 512):
Using default value 512
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sde: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 512 524272 fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
centos2[root /dev]# fdisk /dev/sdf
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-512, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-512, default 512):
Using default value 512
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
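The interactive fdisk dialogue above (n, p, 1, two defaults, t, fd, w) can be replayed non-interactively. This is a hedged sketch: it only prints the keystroke sequence, `DISK` is a placeholder for a blank disk, and actually piping the output into fdisk is destructive and needs root.

```shell
# Emit the keystrokes used above: new primary partition 1 spanning the
# whole disk, type fd (Linux raid autodetect), then write.
fdisk_keys() {
  printf 'n\np\n1\n\n\nt\nfd\nw\n'
}
fdisk_keys
# destructive usage (as root): fdisk_keys | fdisk "$DISK"
```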
centos2[root /dev]# ls sd?
sda sdb sdc sdd sde sdf sdg
centos2[root /dev]# fdisk -l sde sdf | grep raid
sde1 1 512 524272 fd Linux raid autodetect
sdf1 1 512 524272 fd Linux raid autodetect
centos2[root /dev]# fdisk -l sde sdf | grep -i mb
Disk sde: 536 MB, 536870912 bytes
Disk sdf: 536 MB, 536870912 bytes
centos2[root /dev]# ls md*
md0
centos2[root /dev]# ls md1
ls: md1: No such file or directory
centos2[root /dev]# mknod md1 b 9 1
centos2[root /dev]# ls -l md1
brw-r--r-- 1 root root 9, 1 Jun 27 03:09 md1
centos2[root /dev]# mdadm --create /dev/md1 --level=1 --radi-devices=2 sde1 sdf1
mdadm: unrecognized option `--radi-devices=2'
Usage: mdadm --help
for help
centos2[root /dev]# mdadm --create /dev/md1 --level=1 --raid-devices=2 sde1 sdf1
mdadm: array /dev/md1 started.
centos2[root /dev]# mdadm --detail --scan -v
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=0.90 UUID=5a088ee8:3b9d3abe:da1c49f8:7244bc72
devices=/dev/sdc1,/dev/sdd1
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=8dfe63aa:4909d365:3e9e8d0e:331eddd9
devices=/dev/sde1,/dev/sdf1
centos2[root /dev]# cat /proc//mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdf1[1] sde1[0]
524160 blocks [2/2] [UU]
md0 : active raid0 sdd1[1] sdc1[0]
1048320 blocks 64k chunks
unused devices: <none>
centos2[root /dev]# mkfs -t ext3 /dev/sde1
mke2fs 1.39 (29-May-2006)
/dev/sde1 is apparently in use by the system; will not make a filesystem here!
centos2[root /dev]# # note: the member device can't be formatted while the array still holds it
centos2[root /dev]# mkfs -t ext3 /dev/md1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
131072 inodes, 524160 blocks
26208 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
centos2[root /dev]#
centos2[root /dev]# mkdir /raid1
centos2[root /dev]# mount /dev/md1 /raid1
centos2[root /dev]# # next: RAID5 setup
centos2[root /dev]# cat /proc//mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdf1[1] sde1[0]
524160 blocks [2/2] [UU]
md0 : active raid0 sdd1[1] sdc1[0]
1048320 blocks 64k chunks
unused devices: <none>
centos2[root /dev]# # every disk but one is in use, so RAID5 can't be built yet
centos2[root /dev]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 14G 3.7G 8.7G 30% /
/dev/sdb1 494M 11M 458M 3% /home
tmpfs 394M 0 394M 0% /dev/shm
/dev/md1 496M 11M 460M 3% /raid1
centos2[root /dev]# # let's tear the existing RAID down
centos2[root /dev]# mdadm --stop /dev/md5
mdadm: error opening /dev/md5: No such file or directory
centos2[root /dev]# umount /dev/md1
centos2[root /dev]# mdadm --stop /dev/md5
mdadm: error opening /dev/md5: No such file or directory
centos2[root /dev]# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
centos2[root /dev]# mdadm --remove /dev/md1
centos2[root /dev]# mdadm --detail -scan
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=0.90 UUID=5a088ee8:3b9d3abe:da1c49f8:7244bc72
centos2[root /dev]# cat /proc//mdstat
Personalities : [raid0] [raid1]
md0 : active raid0 sdd1[1] sdc1[0]
1048320 blocks 64k chunks
unused devices: <none>
centos2[root /dev]# # after a reboot, the deleted array comes back
centos2[root /dev]# # because its metadata is still stored in each member's superblock
centos2[root /dev]# mdadm --zero-superblock /dev/sde1
centos2[root /dev]# # zeroing the superblock is the step that must not be skipped
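The teardown order used here (unmount, stop the array, zero every member's superblock) is what actually prevents the reassembly-on-reboot problem. A hedged sketch that only prints the command sequence; the array and member names are placeholders, and piping the output to sh as root would execute it:

```shell
# Print the teardown sequence for an md array. Zeroing each member's
# superblock is the step that keeps the array from reappearing at boot.
md_teardown() {
  local array=$1; shift
  echo "umount $array"
  echo "mdadm --stop $array"
  local dev
  for dev in "$@"; do
    echo "mdadm --zero-superblock $dev"
  done
}
md_teardown /dev/md1 /dev/sde1 /dev/sdf1
```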
centos2[root /dev]# umount /dev/md0
umount: /dev/md0: not mounted
centos2[root /dev]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
centos2[root /dev]# mdadm --remove /dev/md0
centos2[root /dev]# mdadm --zero-superblock /dev/sdc1 /dev/sdd1
centos2[root /dev]# mdadm --zero-superblock /dev/sdf1
centos2[root /dev]# ls sd?
sda sdb sdc sdd sde sdf sdg
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]# mknod md5 b 9 5
centos2[root /dev]# ls -l md*
brw-r----- 1 root disk 9, 0 Jun 27 02:53 md0
brw-r----- 1 root disk 9, 1 Jul  1 19:53 md1
brw-r--r-- 1 root root 9, 5 Jul  1 20:21 md5
centos2[root /dev]# ls -l md5
brw-r--r-- 1 root root 9, 5 Jul  1 20:21 md5
centos2[root /dev]# ls sd?
sda sdb sdc sdd sde sdf sdg
centos2[root /dev]# # now bundle sdc, sdd, and sde together
centos2[root /dev]#
centos2[root /dev]# fdisk -l sdc sdd sde | grep -i raid
sdc1 1 512 524272 fd Linux raid autodetect
sdd1 1 512 524272 fd Linux raid autodetect
sde1 1 512 524272 fd Linux raid autodetect
centos2[root /dev]# # the partitions are already typed for RAID
centos2[root /dev]#
centos2[root /dev]# mdadm --create /dev/md5 --level=5 --raid-devices=3 sdc1 sdd1 ade1
mdadm: sdc1 appears to contain an ext2fs file system
size=1048320K mtime=Thu Jun 26 05:41:54 2014
mdadm: Cannot open ade1: No such file or directory
mdadm: create aborted
centos2[root /dev]# mdadm --create /dev/md5 --level=5 --raid-devices=3 sdc1 sdd1 sde1
mdadm: sdc1 appears to contain an ext2fs file system
size=1048320K mtime=Thu Jun 26 05:41:54 2014
mdadm: sde1 appears to contain an ext2fs file system
size=524160K mtime=Tue Jul 1 19:53:40 2014
Continue creating array? y
mdadm: array /dev/md5 started.
centos2[root /dev]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[1] sdc1[0]
1048320 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[=========>...........] recovery = 49.8% (261760/524160) finish=0.1min speed=32720K/sec
unused devices: <none>
centos2[root /dev]# mdadm --detail sscaan -v
mdadm: cannot open sscaan: No such file or directory
centos2[root /dev]# mdadm --detail --scan -v
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=0.90 UUID=c4db2fc4:9f18aad3:d665932a:a49db583
devices=/dev/sdc1,/dev/sdd1,/dev/sde1
centos2[root /dev]# mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Tue Jul 1 20:23:54 2014
Raid Level : raid5
Array Size : 1048320 (1023.92 MiB 1073.48 MB)
Used Dev Size : 524160 (511.96 MiB 536.74 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Tue Jul 1 20:24:12 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : c4db2fc4:9f18aad3:d665932a:a49db583
Events : 0.4
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
2 8 65 2 active sync /dev/sde1
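The Array Size line is consistent with RAID5 geometry: with three members, one member's worth of capacity holds parity, so the usable size is (n - 1) times the per-device used size. Sketch:

```shell
# RAID5 usable capacity: one member's worth of space goes to parity.
members=3
dev_size=524160   # "Used Dev Size" from mdadm --detail above
echo $(( (members - 1) * dev_size ))   # prints: 1048320
```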
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]# mkfs -t ext3 /dev/md5
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262080 blocks
13104 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
centos2[root /dev]# mkdir /webhome
centos2[root /dev]# mount /dev/md5 /webhome
centos2[root /dev]# df -h | tail -1
/dev/md5 1008M 18M 940M 2% /webhome
centos2[root /dev]#
centos2[root /dev]# cd ~
centos2[root /root]# useradd -d /webhome/webuser1 webuser1
centos2[root /root]# useradd -d /webhome/webuser2 webuser2
centos2[root /root]# useradd -d /webhome/webuser3 webuser3
centos2[root /root]# useradd -d /webhome/webuser4 webuser4
centos2[root /root]# useradd -d /webhome/webuser5 webuser5
centos2[root /root]#
centos2[root /root]#
centos2[root /root]# mkdir /webhome/test
centos2[root /root]# cp /etc/*.conf /webhome/test
centos2[root /root]# df -h | tail -1
/dev/md5 1008M 18M 939M 2% /webhome
centos2[root /root]# ls /webhome
lost+found test webuser1 webuser2 webuser3 webuser4 webuser5
centos2[root /root]# su - webuser3
centos2[root /]#
centos2[root /]# # pulled a disk to simulate a failure
centos2[root /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[2] sdc1[0]
1048320 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
unused devices: <none>
centos2[root /]# # one disk now shows as missing
centos2[root /]# mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Tue Jul 1 20:23:54 2014
Raid Level : raid5
Array Size : 1048320 (1023.92 MiB 1073.48 MB)
Used Dev Size : 524160 (511.96 MiB 536.74 MB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Tue Jul 1 20:34:05 2014
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : c4db2fc4:9f18aad3:d665932a:a49db583
Events : 0.4
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 0 0 1 removed
2 8 49 2 active sync /dev/sdd1
centos2[root /]# # the failed slot is listed as "removed"
centos2[root /]# cd /dev/
centos2[root /dev]# ls sd?
sda sdb sdc sdd sde sdf
centos2[root /dev]# # now slot sde back in as the replacement
centos2[root /dev]# fdisk -l /dev/sde
Disk /dev/sde: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 512 524272 fd Linux raid autodetect
centos2[root /dev]# # the type is still fd, so it can be plugged straight back in
centos2[root /dev]# mdadm --manage /dev/md5 --add /dev/sde1
mdadm: added /dev/sde1
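--add alone was enough here because the old disk had already disappeared; replacing a still-present member takes a fail/remove/add cycle via mdadm --manage. A sketch that only prints the commands (the device names are placeholders, not taken from this session):

```shell
# Print the fail -> remove -> add cycle for swapping one md member.
md_replace() {
  local array=$1 old=$2 new=$3
  echo "mdadm --manage $array --fail $old"
  echo "mdadm --manage $array --remove $old"
  echo "mdadm --manage $array --add $new"
}
md_replace /dev/md5 /dev/sdX1 /dev/sdY1
```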
---------------------------------------------------
---------------------------------------------------
---------------------------------------------------
Watching the rebuild percentage from another session
centos2[root /root]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[2] sdc1[0]
1048320 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
[============>........] recovery = 62.3% (326916/524160) finish=0.1min speed=23351K/sec
unused devices: <none>
centos2[root /root]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[2] sdc1[0]
1048320 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
[============>........] recovery = 64.8% (340096/524160) finish=0.1min speed=22673K/sec
unused devices: <none>
centos2[root /root]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[2] sdc1[0]
1048320 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
[=============>.......] recovery = 68.3% (359304/524160) finish=0.1min speed=22456K/sec
unused devices: <none>
centos2[root /root]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[2] sdc1[0]
1048320 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
[=================>...] recovery = 88.0% (462080/524160) finish=0.0min speed=22003K/sec
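Rather than rerunning cat by hand, `watch -n1 cat /proc/mdstat` refreshes the view automatically; the percentage can also be scraped with sed. The sample line below is copied from the output above:

```shell
# Pull the rebuild percentage out of an mdstat progress line.
line='[============>........]  recovery = 62.3% (326916/524160) finish=0.1min speed=23351K/sec'
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
echo "$pct"   # prints: 62.3
```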
---------------------------------------------------
---------------------------------------------------
---------------------------------------------------
centos2[root /dev]# mount /dev/md5
mount: can't find /dev/md5 in /etc/fstab or /etc/mtab
centos2[root /dev]# mount /dev/md5 /webhome/
centos2[root /dev]# cd /webhome
centos2[root /webhome]# ls
lost+found test webuser1 webuser2 webuser3 webuser4 webuser5
centos2[root /webhome]#
I also did RAID 10 but missed capturing it; needs more practice.
Setting up quotas
centos2[root /root]# alias df
alias df='df -h'
centos2[root /root]# \df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 3525120 163825 3361295 5% /
/dev/sdb1 130560 49 130511 1% /home
tmpfs 100864 1 100863 1% /dev/shm
centos2[root /root]#
centos2[root /root]#
centos2[root /root]# rpm -q quota
quota-3.13-8.el5
centos2[root /root]# cd /dev
You have new mail in /var/spool/mail/root
centos2[root /dev]# ls
MAKEDEV fd0u1760 md0 ram6 stdin tty35 tty8
X0R fd0u1840 md5 ram7 stdout tty36 tty9
adsp fd0u1920 mem ram8 systty tty37 ttyS0
agpgart fd0u360 midi ram9 tty tty38 ttyS1
audio fd0u720 mixer ramdisk tty0 tty39 ttyS2
autofs fd0u800 net random tty1 tty4 ttyS3
bus fd0u820 network_latency rawctl tty10 tty40 urandom
cdrom fd0u830 network_throughput root tty11 tty41 usbdev1.1_ep81
cdrom-hdc floppy null rtc tty12 tty42 usbdev2.1_ep00
cdrw floppy-fd0 nvram sda tty13 tty43 usbdev2.1_ep81
cdrw-hdc full oldmem sda1 tty14 tty44 usbdev2.2_ep00
cdwriter gpmctl par0 sda2 tty15 tty45 usbdev2.2_ep81
cdwriter-hdc hda parport0 sdb tty16 tty46 usbdev2.2_ep82
console hdb parport1 sdb1 tty17 tty47 usbdev2.3_ep00
core hdc parport2 sdc tty18 tty48 usbdev2.3_ep81
cpu_dma_latency hidraw0 parport3 sdc1 tty19 tty49 vcs
disk hidraw1 port sdd tty2 tty5 vcs2
dmmidi hpet ppp sdd1 tty20 tty50 vcs3
dsp initctl ptmx sde tty21 tty51 vcs4
dvd input pts sde1 tty22 tty52 vcs5
dvd-hdc js0 ram sdf tty23 tty53 vcs6
dvdrw kmsg ram0 sequencer tty24 tty54 vcs7
dvdrw-hdc log ram1 sequencer2 tty25 tty55 vcs8
dvdwriter loop0 ram10 sg0 tty26 tty56 vcsa
dvdwriter-hdc loop1 ram11 sg1 tty27 tty57 vcsa2
fd loop2 ram12 sg2 tty28 tty58 vcsa3
fd0 loop3 ram13 sg3 tty29 tty59 vcsa4
fd0u1040 loop4 ram14 sg4 tty3 tty6 vcsa5
fd0u1120 loop5 ram15 sg5 tty30 tty60 vcsa6
fd0u1440 loop6 ram2 shm tty31 tty61 vcsa7
fd0u1680 loop7 ram3 snapshot tty32 tty62 vcsa8
fd0u1722 lp0 ram4 snd tty33 tty63 zero
fd0u1743 mapper ram5 stderr tty34 tty7
centos2[root /dev]# ls sd?
sda sdb sdc sdd sde sdf
centos2[root /dev]# mkdir /qthome
centos2[root /dev]# fdisk /dev/sdc1
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): p
Disk /dev/sdc1: 536 MB, 536854528 bytes
64 heads, 32 sectors/track, 511 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Command (m for help): t
No partition is defined yet!
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-511, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-511, default 511):
Using default value 511
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]# mkfs -t ext3 /dev/sdc1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
131072 inodes, 524272 blocks
26213 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
centos2[root /dev]# mount /dev/sdc1 /qthome/
centos2[root /dev]# useradd -d /qthome/freeuser1 freeuser1
centos2[root /dev]# useradd -d /qthome/freeuser2 freeuser2
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]# useradd -d /qthome/freeuser3 freeuser3
centos2[root /dev]#
centos2[root /dev]#
centos2[root /dev]# useradd -d /qthome/freeuser4 freeuser4
centos2[root /dev]# useradd -d /qthome/freeuser5 freeuser5
centos2[root /dev]# ls /qthome/
freeuser1 freeuser2 freeuser3 freeuser4 freeuser5 lost+found
centos2[root /dev]# cd /etc
centos2[root /etc]# vi fstab
centos2[root /etc]# tail -1 fstab
/dev/sdc1 /qthome ext3 defaults,usrquota 1 2
centos2[root /etc]# mount | tail -1
/dev/sdc1 on /qthome type ext3 (rw)
centos2[root /etc]# mount -o remount /qthome
centos2[root /etc]# mount | tail -1
/dev/sdc1 on /qthome type ext3 (rw,usrquota)
centos2[root /etc]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 14G 3.8G 8.6G 31% /
/dev/sdb1 494M 11M 458M 3% /home
tmpfs 394M 0 394M 0% /dev/shm
/dev/sdc1 496M 11M 460M 3% /qthome
centos2[root /etc]# cd /qthome/
centos2[root /qthome]# outch aquota.user
-bash: outch: command not found
centos2[root /qthome]# touch aquota.user
centos2[root /qthome]# quota
quota: Quota file not found or has wrong format.
centos2[root /qthome]# quotacheck -v /qthome/
quotacheck: WARNING - Quotafile /qthome/aquota.user was probably truncated. Can't save quota settings...
quotacheck: Scanning /dev/sdc1 [/qthome] quotacheck: Old group file not found. Usage will not be substracted.
done
quotacheck: Checked 33 directories and 33 files
centos2[root /qthome]# quotacheck -v /qthome/
quotacheck: Scanning /dev/sdc1 [/qthome] quotacheck: Old group file not found. Usage will not be substracted.
done
quotacheck: Checked 33 directories and 33 files
centos2[root /qthome]# ls -l aquota.user
-rw-r--r-- 1 root root 8192 Jul  1 22:04 aquota.user
centos2[root /qthome]#
centos2[root /qthome]#
centos2[root /qthome]# file aquota.userer
aquota.userer: ERROR: cannot open `aquota.userer' (No such file or directory)
centos2[root /qthome]# file aquota.user
aquota.user: data
centos2[root /qthome]# quotacheck -v /qthome/
quotacheck: Scanning /dev/sdc1 [/qthome] quotacheck: Old group file not found. Usage will not be substracted.
done
quotacheck: Checked 33 directories and 33 files
centos2[root /qthome]# quotaon -v /qthome/
/dev/sdc1 [/qthome]: user quotas turned on
centos2[root /qthome]#
centos2[root /qthome]# file aquota.userer
aquota.userer: ERROR: cannot open `aquota.userer' (No such file or directory)
centos2[root /qthome]# file aquota.user
aquota.user: data
centos2[root /qthome]# quotacheck -v /qthome/
quotacheck: Scanning /dev/sdc1 [/qthome] quotacheck: Old group file not found. Usage will not be substracted.
done
quotacheck: Checked 33 directories and 33 files
centos2[root /qthome]# quotaon -v /qthome/
/dev/sdc1 [/qthome]: user quotas turned on
centos2[root /qthome]#
centos2[root /qthome]# edquota -u freeuser1
centos2[root /qthome]# repquota /qthome/
*** Report for user quotas on device /dev/sdc1
Block grace time: 7days; Inode grace time: 7days
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 10544 0 0 4 0 0
freeuser1 -- 12 5000 10000 12 0 0
freeuser2 -- 12 0 0 12 0 0
freeuser3 -- 12 0 0 12 0 0
freeuser4 -- 12 0 0 12 0 0
freeuser5 -- 12 0 0 12 0 0
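repquota counts block limits in 1 KiB units, so freeuser1's 5000/10000 soft/hard limits are roughly 5 MB and 10 MB, which is why copying /bin/ksh a handful of times trips the warning later on. The conversion:

```shell
# repquota block limits are in 1 KiB units.
soft=5000
hard=10000
echo $(( soft * 1024 ))   # soft limit in bytes: 5120000 (~5 MB)
echo $(( hard * 1024 ))   # hard limit in bytes: 10240000 (~10 MB)
```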
centos2[root /qthome]#
centos2[root /qthome]# su - freeuser1
[freeuser1@centos2 ~]$ mkdir test
[freeuser1@centos2 ~]$ cd test
[freeuser1@centos2 test]$ cp /bin/khs .
cp: cannot stat `/bin/khs': No such file or directory
[freeuser1@centos2 test]$ cp /bin/ksh .
[freeuser1@centos2 test]$ cp /bin/ksh 1
[freeuser1@centos2 test]$ cp /bin/ksh 2
[freeuser1@centos2 test]$ cp /bin/ksh 3
sdc1: warning, user block quota exceeded.
[freeuser1@centos2 test]$ cd /qthome/
[freeuser1@centos2 qthome]$ repquota /qthome/
-bash: repquota: command not found
[freeuser1@centos2 qthome]$ cp /bin/ksh 4
cp: cannot create regular file `4': Permission denied
[freeuser1@centos2 qthome]$ cd /test
-bash: cd: /test: No such file or directory
[freeuser1@centos2 qthome]$ cd test
-bash: cd: test: No such file or directory
[freeuser1@centos2 qthome]$ cd ./test
-bash: cd: ./test: No such file or directory
[freeuser1@centos2 qthome]$ cd ~
[freeuser1@centos2 ~]$ cd /test
-bash: cd: /test: No such file or directory
[freeuser1@centos2 ~]$ cd test
[freeuser1@centos2 test]$ cp /bin/ksh 5
[freeuser1@centos2 test]$ cp /bin/ksh 6
[freeuser1@centos2 test]$ cp /bin/ksh 7
[freeuser1@centos2 test]$ cp /bin/ksh 8
sdc1: write failed, user block limit reached.
cp: writing `8': Disk quota exceeded
[freeuser1@centos2 test]$ exit
logout
centos2[root /qthome]# repquota /qthome/
*** Report for user quotas on device /dev/sdc1
Block grace time: 7days; Inode grace time: 7days
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 10544 0 0 4 0 0
freeuser1 +- 10000 5000 10000 6days 22 0 0
freeuser2 -- 12 0 0 12 0 0
freeuser3 -- 12 0 0 12 0 0
freeuser4 -- 12 0 0 12 0 0
freeuser5 -- 12 0 0 12 0 0
centos2[root /qthome]# # the quota is almost full
centos2[root /qthome]# # once the 6-day grace period expires, the soft limit becomes absolute: within
# the grace window zero-byte files and directories can still be created, but after it nothing can be.
# A user can write up to 5 MB freely and 10 MB at most; crossing 5 MB starts the grace-period warning.