
IRC log for #gluster, 2016-01-20


All times shown according to UTC.

Time Nick Message
00:15 badone joined #gluster
00:16 curratore joined #gluster
00:29 farhorizon joined #gluster
00:32 dlambrig joined #gluster
01:16 calavera joined #gluster
01:17 EinstCrazy joined #gluster
01:17 jotun joined #gluster
01:24 zhangjn joined #gluster
01:29 Lee1092 joined #gluster
01:30 dlambrig joined #gluster
01:50 EinstCrazy joined #gluster
01:55 EinstCrazy joined #gluster
02:00 EinstCra_ joined #gluster
02:03 rcampbel3 joined #gluster
02:06 JesperA joined #gluster
02:06 dlambrig joined #gluster
02:37 harish joined #gluster
02:46 bharata-rao joined #gluster
02:46 skoduri joined #gluster
02:46 edong23 joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 harish joined #gluster
02:53 davidbitton joined #gluster
03:04 haomaiwang joined #gluster
03:06 davidbitton joined #gluster
03:24 sakshi joined #gluster
03:24 badone joined #gluster
03:29 kanagaraj joined #gluster
03:41 kanagaraj joined #gluster
03:42 zhangjn joined #gluster
03:43 overclk joined #gluster
03:43 RameshN_ joined #gluster
03:44 EinstCrazy joined #gluster
03:48 Rapture joined #gluster
03:49 atinm joined #gluster
03:53 [o__o] joined #gluster
03:53 badone joined #gluster
03:53 rcampbel3 joined #gluster
03:59 mowntan joined #gluster
04:00 Manikandan joined #gluster
04:08 farhorizon joined #gluster
04:18 gem joined #gluster
04:19 cmdrmoozy joined #gluster
04:21 shubhendu joined #gluster
04:26 kotreshhr joined #gluster
04:34 davidbitton joined #gluster
04:40 nehar joined #gluster
04:44 Bhaskarakiran_ joined #gluster
04:46 haomaiwang joined #gluster
04:47 RameshN_ joined #gluster
04:53 cmdrmoozy joined #gluster
04:58 gowtham joined #gluster
05:01 ndarshan joined #gluster
05:05 pppp joined #gluster
05:08 overclk joined #gluster
05:13 samikshan joined #gluster
05:14 karthikfff joined #gluster
05:16 kdhananjay joined #gluster
05:19 aravindavk joined #gluster
05:22 anil joined #gluster
05:23 skoduri joined #gluster
05:25 Apeksha joined #gluster
05:31 rcampbel3 joined #gluster
05:35 ppai joined #gluster
05:37 jiffin joined #gluster
05:38 nangthang joined #gluster
05:48 ovaistar_ joined #gluster
05:51 ramky joined #gluster
05:52 poornimag joined #gluster
05:54 hchiramm_ joined #gluster
05:58 itisravi joined #gluster
06:00 cmdrmoozy joined #gluster
06:03 kotreshhr left #gluster
06:07 karnan joined #gluster
06:11 rafi joined #gluster
06:13 vimal joined #gluster
06:22 ramky joined #gluster
06:22 nangthang joined #gluster
06:26 zhangjn joined #gluster
06:30 ashiq joined #gluster
06:31 ashiq joined #gluster
06:33 ovaistariq joined #gluster
06:36 skoduri joined #gluster
06:46 mobaer joined #gluster
06:46 eljrax joined #gluster
06:47 lord4163 joined #gluster
06:50 spalai joined #gluster
06:51 Manikandan_wfh joined #gluster
06:57 vmallika joined #gluster
06:57 skoduri joined #gluster
06:58 haomaiw__ joined #gluster
06:59 harish_ joined #gluster
07:01 64MAAT2FQ joined #gluster
07:05 ovaistar_ joined #gluster
07:10 Saravanakmr joined #gluster
07:13 inodb joined #gluster
07:16 mhulsman joined #gluster
07:16 harish_ joined #gluster
07:22 jtux joined #gluster
07:22 zhangjn joined #gluster
07:23 ramky joined #gluster
07:28 arcolife joined #gluster
07:29 mobaer joined #gluster
07:31 ramky joined #gluster
07:34 atalur joined #gluster
07:38 tswartz joined #gluster
07:40 [Enrico] joined #gluster
07:45 poornimag joined #gluster
07:45 jvandewege joined #gluster
07:51 ovaistariq joined #gluster
07:59 ramky joined #gluster
08:01 haomaiwang joined #gluster
08:02 jtux joined #gluster
08:12 ramteid joined #gluster
08:25 jvandewege joined #gluster
08:31 ivan_rossi joined #gluster
08:36 jwd joined #gluster
08:37 ramky joined #gluster
08:49 itisravi joined #gluster
09:00 spalai joined #gluster
09:00 curratore joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 karnan joined #gluster
09:14 ctria joined #gluster
09:24 Saravanakmr joined #gluster
09:31 jiffin1 joined #gluster
09:37 ramky_ joined #gluster
09:38 kdhananjay joined #gluster
09:50 MrAbaddon joined #gluster
09:57 spalai joined #gluster
09:58 Slashman joined #gluster
10:01 EinstCra_ joined #gluster
10:01 haomaiwa_ joined #gluster
10:14 samppah joined #gluster
10:23 mobaer joined #gluster
10:29 poornimag joined #gluster
10:32 bharata_ joined #gluster
10:33 Saravanakmr joined #gluster
10:33 overclk joined #gluster
10:38 ramky__ joined #gluster
10:47 [Enrico] joined #gluster
10:52 harish_ joined #gluster
10:57 markd_ joined #gluster
11:00 sakshi joined #gluster
11:01 haomaiwa_ joined #gluster
11:23 mobaer joined #gluster
11:26 jiffin1 joined #gluster
11:30 rafi1 joined #gluster
11:31 EinstCrazy joined #gluster
11:37 kdhananjay1 joined #gluster
11:38 ramky_ joined #gluster
11:54 MrAbaddon joined #gluster
11:54 Pupeno joined #gluster
11:54 Pupeno joined #gluster
12:01 haomaiwa_ joined #gluster
12:06 jdarcy joined #gluster
12:09 nangthang joined #gluster
12:10 curratore JoeJulian: finally I tested using the slave volume without docker from a "dockerized" master and it worked on the first try, but without push-pem
12:13 curratore so thanks for your help JoeJulian hagarth ;)
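For readers following the geo-replication thread: push-pem is an option to the session-create step that distributes the ssh keys for you. A minimal sketch of the usual sequence, with illustrative volume and host names (this is not necessarily what curratore ran):

    # generate and collect the pem keys on the master cluster
    gluster system:: execute gsec_create
    # create the session; push-pem copies the keys to the slave nodes automatically
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start

Without push-pem, the generated keys have to be placed on the slave by hand before the session will start.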
12:14 rafi joined #gluster
12:17 dlambrig joined #gluster
12:20 ppai joined #gluster
12:39 ramky__ joined #gluster
12:39 unclemarc joined #gluster
12:44 shubhendu joined #gluster
12:45 karnan joined #gluster
12:49 b0p joined #gluster
12:56 Peppard joined #gluster
12:56 cliluw joined #gluster
12:59 ppai joined #gluster
12:59 davidhadas_ joined #gluster
13:01 haomaiwang joined #gluster
13:03 jdarcy joined #gluster
13:14 Ethical2ak joined #gluster
13:26 spardhas joined #gluster
13:28 spardhas hi all
13:29 spardhas can you tell me if i can use sharding in my production VM storage?
13:29 spardhas i saw on the blog that sharding has been stable since december 24
13:31 spardhas but i'm not sure if i can
13:31 spardhas i'm suspicious =)
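No answer appears in this log, but for reference, sharding is a per-volume option. A minimal sketch, assuming an existing replicated volume; the volume name vmvol and the block size are illustrative, not taken from the discussion:

    # enable sharding (only affects files created after this point)
    gluster volume set vmvol features.shard on
    # optional: a larger shard size is commonly used for VM images
    gluster volume set vmvol features.shard-block-size 512MB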
13:32 farhorizon joined #gluster
13:35 mobaer joined #gluster
13:35 d0nn1e joined #gluster
13:35 haomaiwang joined #gluster
13:40 ramky_ joined #gluster
13:41 mhulsman joined #gluster
13:46 jwang_ joined #gluster
13:46 davidhadas__ joined #gluster
13:47 farhoriz_ joined #gluster
13:48 Akee1 joined #gluster
13:49 muneerse2 joined #gluster
13:49 mrrrgn_ joined #gluster
13:49 morse_ joined #gluster
13:50 natgeorg joined #gluster
13:51 samppah_ joined #gluster
13:51 wiza_ joined #gluster
13:51 dblack_ joined #gluster
13:51 squeakyneb_ joined #gluster
13:51 PaulePan1er joined #gluster
13:51 saltsa_ joined #gluster
13:51 ndevos_ joined #gluster
13:51 argonius_ joined #gluster
13:51 obnox_ joined #gluster
13:51 kblin_ joined #gluster
13:51 Nuxr0 joined #gluster
13:51 the-me_ joined #gluster
13:52 partner_ joined #gluster
13:52 unicky- joined #gluster
13:52 monotek joined #gluster
13:55 yoavz- joined #gluster
13:55 tg2_ joined #gluster
13:55 JoeJulian_ joined #gluster
13:55 Iouns_ joined #gluster
13:55 d0nn1e_ joined #gluster
13:55 bfoster1 joined #gluster
13:55 kblin joined #gluster
13:56 DJCl34n joined #gluster
13:56 codex joined #gluster
13:57 sjohnsen joined #gluster
13:58 Bardack joined #gluster
13:58 telmich joined #gluster
13:58 telmich joined #gluster
14:00 Vaizki joined #gluster
14:00 samikshan joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 ndarshan joined #gluster
14:02 ctria joined #gluster
14:02 Apeksha joined #gluster
14:02 PsionTheory joined #gluster
14:02 The_Ball joined #gluster
14:02 Slashman joined #gluster
14:02 dmnchild joined #gluster
14:02 alghost joined #gluster
14:03 stopbyte joined #gluster
14:04 Humble joined #gluster
14:05 dgandhi joined #gluster
14:05 jtux joined #gluster
14:06 dgandhi joined #gluster
14:08 dgandhi joined #gluster
14:09 dgandhi joined #gluster
14:11 dgandhi joined #gluster
14:12 dgandhi joined #gluster
14:14 primusinterpares joined #gluster
14:14 gem joined #gluster
14:14 dgandhi joined #gluster
14:16 dgandhi joined #gluster
14:16 plarsen joined #gluster
14:17 dgandhi joined #gluster
14:18 dgandhi joined #gluster
14:19 dgandhi joined #gluster
14:21 dgandhi joined #gluster
14:22 dgandhi joined #gluster
14:22 dgandhi joined #gluster
14:24 dgandhi joined #gluster
14:25 dgandhi joined #gluster
14:28 dgandhi joined #gluster
14:30 dgandhi joined #gluster
14:31 dgandhi joined #gluster
14:32 DJClean joined #gluster
14:33 dgandhi joined #gluster
14:34 geekonek joined #gluster
14:35 dgandhi joined #gluster
14:38 dgandhi joined #gluster
14:38 patrick_g joined #gluster
14:39 dgandhi joined #gluster
14:40 dgandhi joined #gluster
14:41 patrick_g I have a four-node distributed replicated gluster volume (3.6.6) where the load spikes on one of the peers (I believe during a heal) and all mounts to the volume become unresponsive. Glusterfsd uses 800% CPU (maxed out) and load average is 14-20.
14:41 dgandhi joined #gluster
14:42 patrick_g Not sure if it's related, but the gluster brick log is full of this: [2016-01-20 09:48:32.687769] W [server-resolve.c:437:resolve_anonfd_simple] 0-server: inode for the gfid (15bb308d-702e-4a73-bd1c-13a4f010e579) is not found. anonymous fd creation failed
14:42 patrick_g always the same file. ~200,000 lines of log entries
14:43 patrick_g anyone seen behavior like this or have an idea as to what might be happening?
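A few commands that are commonly used to confirm whether self-heal is the source of such a load spike; a sketch only, assuming the homegfs volume that patrick_g posts about later in this log:

    # entries still pending heal, per brick
    gluster volume heal homegfs info
    # pending-heal counts, if the build supports it
    gluster volume heal homegfs statistics heal-count
    # confirm bricks and self-heal daemons are online
    gluster volume status homegfs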
14:45 dgandhi joined #gluster
14:45 hagarth joined #gluster
14:46 raghu joined #gluster
14:47 dgandhi joined #gluster
14:49 dgandhi joined #gluster
14:50 dgandhi joined #gluster
14:52 dgandhi joined #gluster
14:53 dgandhi joined #gluster
14:54 mhulsman joined #gluster
14:54 dgandhi joined #gluster
14:55 mobaer joined #gluster
14:55 dgandhi joined #gluster
14:57 Ethical2ak @Patrick_g : If you do '' df -h ''
14:57 Ethical2ak is the command frozen ?
14:59 patrick_g no
14:59 patrick_g it functions
15:00 dgandhi joined #gluster
15:01 haomaiwang joined #gluster
15:01 Ethical2ak try to reassign the gfid on the broken node and then restart glusterd/glusterfsd
15:02 Ethical2ak setfattr -n trusted.gfid -v 0x00000000000000000000000000000001   /yourvolumepath
15:02 Ethical2ak setfattr -n trusted.glusterfs.volume-id -v 0x$(gluster volume info | grep ID | awk '{print $3}' | sed 's#-##g') /yourvolumepath
15:02 dgandhi joined #gluster
15:03 Ethical2ak service glusterd stop , service glusterfsd stop and then restart them all
15:03 patrick_g when you say restart them all, reboot all the nodes?
15:03 Ethical2ak nop
15:03 Ethical2ak only the service
15:03 patrick_g ok
15:03 Ethical2ak make sure that glusterd and glusterfsd are stopped properly
15:03 patrick_g you said stop. the services
15:03 dgandhi joined #gluster
15:04 Ethical2ak yes
15:04 Ethical2ak the gluster services
15:04 coredump joined #gluster
15:04 dgandhi joined #gluster
15:05 raghu joined #gluster
15:05 Ethical2ak Afterwards you can check with '' gluster volume status '' whether all ports have been assigned properly
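Pulling Ethical2ak's steps together into one sequence; this is a sketch of the advice above, not a verified fix. /yourvolumepath is the brick-directory placeholder from the commands above, and the sysvinit service names are the ones used in this thread:

    # on the affected node only
    setfattr -n trusted.gfid -v 0x00000000000000000000000000000001 /yourvolumepath
    # with more than one volume, use ''gluster volume info <VOLNAME>'' so only that volume's ID is picked up
    setfattr -n trusted.glusterfs.volume-id -v 0x$(gluster volume info | grep ID | awk '{print $3}' | sed 's#-##g') /yourvolumepath
    # stop both services, confirm nothing is left running, then start them again
    # (some distributions do not ship a separate glusterfsd service)
    service glusterd stop
    service glusterfsd stop
    pgrep -fl gluster      # should print nothing before the restart
    service glusterd start
    # verify that all ports and PIDs come back
    gluster volume status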
15:06 patrick_g ------------------------------------------------------------------------------
15:06 patrick_g Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs     49152   Y   3799
15:06 patrick_g Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs     49152   Y   3880
15:06 glusterbot patrick_g: ------------------------------------------------------------------------------'s karma is now -12
15:06 patrick_g Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs     49153   Y   3804
15:06 patrick_g Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs     49153   Y   3885
15:06 patrick_g Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs     49152   Y   3965
15:06 patrick_g Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs     49152   Y   3951
15:06 patrick_g Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs     49153   Y   3970
15:06 patrick_g Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs     49153   Y   3956
15:06 patrick_g NFS Server on localhost                                 2049    Y   3813
15:06 patrick_g Self-heal Daemon on localhost                           N/A     Y   3818
15:06 patrick_g NFS Server on gfsib01b.corvidtec.com                    2049    Y   3894
15:06 patrick_g Self-heal Daemon on gfsib01b.corvidtec.com              N/A     Y   3899
15:06 patrick_g NFS Server on gfsib02a.corvidtec.com                    2049    Y   3979
15:06 patrick_g Self-heal Daemon on gfsib02a.corvidtec.com              N/A     Y   3984
15:06 patrick_g NFS Server on gfsib02b.corvidtec.com                    2049    Y   3965
15:07 patrick_g when you say the volume path, do you mean on the brick?
15:07 dgandhi joined #gluster
15:08 doekia joined #gluster
15:08 mhulsman joined #gluster
15:08 caveat- joined #gluster
15:09 hagarth joined #gluster
15:10 Ethical2ak Do you have something to test an upload on your broken node ?
15:10 muneerse joined #gluster
15:11 Ethical2ak yes
15:11 Ethical2ak The one that is in your fstab entry
15:14 rwheeler joined #gluster
15:16 patrick_g I can't decide if I should use the brick or the mount point. Let me post for you the fstab contents for the homegfs volume
15:16 nishanth joined #gluster
15:17 patrick_g /dev/mapper/vg01-lvol1  /data/brick01a          xfs     inode64,noatime 1 2
15:17 patrick_g /dev/mapper/vg02-lvol1  /data/brick02a          xfs     inode64,noatime 1 2
15:17 patrick_g gfsib01a.corvidtec.com:/homegfs  /homegfs             glusterfs       transport=tcp,_netdev,acl 0 0
15:17 patrick_g gfsib01a.corvidtec.com:/Software /optgfs/Software     glusterfs       transport=tcp,_netdev,acl 0 0
15:17 patrick_g gfsib01a.corvidtec.com:/Source   /optgfs/Source       glusterfs       transport=tcp,_netdev,acl 0 0
15:17 patrick_g there are two other additional volumes that aren't heavily used
15:18 vimal joined #gluster
15:20 coredump joined #gluster
15:22 Ethical2ak You should use the mount point
15:23 Ethical2ak but your setup seems ok
15:25 muneerse joined #gluster
15:27 patrick_g when you say the setup seems ok, do you still think resetting gfid and restarting glusterd may fix the issue?
15:27 patrick_g this is the current volume status output
15:27 patrick_g Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs     49152   Y   3799
15:27 patrick_g Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs     49152   Y   3880
15:27 patrick_g Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs     49153   Y   3804
15:27 patrick_g Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs     49153   Y   3885
15:27 patrick_g Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs     49152   Y   3965
15:27 Ethical2ak Probably yes
15:27 patrick_g Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs     49152   Y   3951
15:27 patrick_g Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs     49153   Y   3970
15:27 patrick_g Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs     49153   Y   3956
15:27 patrick_g NFS Server on localhost                                 2049    Y   3813
15:27 patrick_g Self-heal Daemon on localhost                           N/A     Y   3818
15:27 patrick_g NFS Server on gfsib01b.corvidtec.com                    2049    Y   3894
15:27 patrick_g Self-heal Daemon on gfsib01b.corvidtec.com              N/A     Y   3899
15:27 patrick_g NFS Server on gfsib02a.corvidtec.com                    2049    Y   3979
15:27 patrick_g Self-heal Daemon on gfsib02a.corvidtec.com              N/A     Y   3984
15:27 patrick_g NFS Server on gfsib02b.corvidtec.com                    2049    Y   3965
15:27 patrick_g Self-heal Daemon on gfsib02b.corvidtec.com              N/A     Y   3970
15:27 ramky_ joined #gluster
15:28 Ethical2ak I've had a lot of weird problems with Gluster; sometimes a simple restart of the service fixes everything
15:28 Ethical2ak If you can have a little downtime
15:29 Ethical2ak you could stop and restart the service on all node
15:29 Ethical2ak and see if your load average changes
15:29 Ethical2ak And if you have time , try to upgrade to Gluster 3.7
15:30 Ethical2ak we solved a couple of our problems with this version
15:30 Ethical2ak lots of fixes.
15:32 farhorizon joined #gluster
15:42 patrick_g The trusted.gfid and trusted.glusterfs.volume-id xattrs are set to 0x00...1 and the homegfs volume ID, and we rebooted the machines one by one and waited for the heals to finish. If this problem reoccurs, we will probably upgrade to 3.7
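For anyone verifying the same thing, the xattrs can be read directly off the brick root; a minimal sketch using the brick path from the status output above:

    # dump the trusted.* xattrs in hex on one brick
    getfattr -d -m . -e hex /data/brick01a/homegfs
    # expect trusted.gfid=0x00000000000000000000000000000001
    # and trusted.glusterfs.volume-id=<the homegfs volume UUID>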
15:44 shyam joined #gluster
15:50 MrAbaddon joined #gluster
15:52 nangthang joined #gluster
16:00 atinm joined #gluster
16:01 haomaiwang joined #gluster
16:02 muneerse joined #gluster
16:04 bowhunter joined #gluster
16:08 bennyturns joined #gluster
16:12 Norky joined #gluster
16:15 wushudoin joined #gluster
16:21 atalur joined #gluster
16:37 jwaibel joined #gluster
16:38 F2Knight joined #gluster
16:40 davidhadas joined #gluster
16:41 inodb joined #gluster
16:48 shubhendu joined #gluster
16:52 RameshN_ joined #gluster
16:58 RameshN__ joined #gluster
17:01 haomaiwang joined #gluster
17:02 jiffin joined #gluster
17:05 MrAbaddon joined #gluster
17:05 dgandhi joined #gluster
17:06 dgandhi joined #gluster
17:11 rafi joined #gluster
17:15 monotek joined #gluster
17:15 coredump joined #gluster
17:15 chirino joined #gluster
17:19 calavera joined #gluster
17:19 F2Knight_ joined #gluster
17:21 RameshN joined #gluster
17:27 MACscr joined #gluster
17:27 MACscr joined #gluster
17:27 semiautomatic joined #gluster
17:30 jiffin joined #gluster
17:37 dlambrig joined #gluster
17:51 Rapture joined #gluster
17:51 farhorizon joined #gluster
17:54 Rapture joined #gluster
18:00 ira joined #gluster
18:01 haomaiwa_ joined #gluster
18:03 b0p joined #gluster
18:08 ivan_rossi left #gluster
18:08 jwd joined #gluster
18:12 jwang__ joined #gluster
18:13 natarej_ joined #gluster
18:15 curratore_ joined #gluster
18:16 d4n13L_ joined #gluster
18:16 shyam1 joined #gluster
18:16 realcliluw joined #gluster
18:18 F2Knight joined #gluster
18:18 lalatend1M joined #gluster
18:18 Ramereth|home joined #gluster
18:20 n-st_ joined #gluster
18:20 ira_ joined #gluster
18:21 mrrrgn_ joined #gluster
18:21 atrius_ joined #gluster
18:22 ovaistariq joined #gluster
18:22 tswartz joined #gluster
18:22 sjohnsen joined #gluster
18:22 marlinc_ joined #gluster
18:22 crashmag joined #gluster
18:23 nhayashi joined #gluster
18:23 armyriad joined #gluster
18:23 portante joined #gluster
18:23 sadbox joined #gluster
18:24 Vaizki joined #gluster
18:26 klaxa joined #gluster
18:29 adamaN joined #gluster
18:31 skylar joined #gluster
18:33 dlambrig joined #gluster
18:39 corretico joined #gluster
18:39 papamoose1 joined #gluster
18:40 mobaer joined #gluster
18:43 bennyturns joined #gluster
18:45 patrick_g left #gluster
18:52 rafi joined #gluster
18:53 dlambrig joined #gluster
18:54 rafi joined #gluster
18:56 dblack_ joined #gluster
18:57 bfoster joined #gluster
18:57 samikshan joined #gluster
18:57 Humble joined #gluster
18:57 raghu joined #gluster
18:57 hagarth joined #gluster
18:58 rafi joined #gluster
18:58 curratore joined #gluster
18:59 shyam1 joined #gluster
18:59 lalatend1M joined #gluster
18:59 portante joined #gluster
18:59 bennyturns joined #gluster
19:01 21WAAUJC7 joined #gluster
19:03 twaddle joined #gluster
19:04 ovaistariq joined #gluster
19:07 ovaistar_ joined #gluster
19:17 B21956 joined #gluster
19:19 rafi joined #gluster
19:20 jiffin1 joined #gluster
19:43 jmarley joined #gluster
19:45 inodb joined #gluster
19:55 ovaistariq joined #gluster
20:00 bennyturns joined #gluster
20:01 haomaiwa_ joined #gluster
20:07 mhulsman joined #gluster
20:25 timotheus1 joined #gluster
20:27 6A4ABQZC4 hey folks, I'm having some real problems mounting gluster from the server, 3.6 on ubuntu 14.04. Each time I get the error "Mount failed. Please check the log file for more details."
20:27 6A4ABQZC4 and in the logs [2016-01-20 16:50:25.932329] E [glusterfsd-mgmt.c:1574:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
20:27 6A4ABQZC4 [2016-01-20 16:50:25.932350] E [glusterfsd-mgmt.c:1674:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/gluster-storage)
20:28 shyam joined #gluster
20:28 JoeJulian What's your volume name?
20:28 6A4ABQZC4 volume1
20:28 JoeJulian That's why. You're trying to mount a volume named "gluster-storage".
20:29 6A4ABQZC4 looking at gluster volume info I see this:
20:29 6A4ABQZC4 Volume Name: volume1
20:29 6A4ABQZC4 Type: Replicate
20:29 6A4ABQZC4 Volume ID: 746ea687-8d73-4ed5-80bb-7e19277be631
20:29 6A4ABQZC4 Status: Started
20:29 6A4ABQZC4 Number of Bricks: 1 x 2 = 2
20:29 6A4ABQZC4 Transport-type: tcp
20:29 6A4ABQZC4 Bricks:
20:29 6A4ABQZC4 Brick1: pd-wfe1:/gluster-storage
20:29 6A4ABQZC4 Brick2: pd-wfe2:/gluster-storage
20:29 JoeJulian use a pastebin
20:30 6A4ABQZC4 sorry, realised as soon as I did it
20:30 JoeJulian Yep, says the volume name is volume1
20:30 JoeJulian You mount a volume, not a brick.
20:30 JoeJulian @glossary
20:30 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
20:30 6A4ABQZC4 thanks, now I feel like a fool :-)
20:30 JoeJulian No problem. Happens all the time.
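The fix, spelled out with the names from the paste above (the mount point /mnt/volume1 is illustrative):

    # fails: this points at a brick path, so the volfile lookup for /gluster-storage fails
    mount -t glusterfs pd-wfe1:/gluster-storage /mnt/volume1
    # works: mount the volume by its name
    mount -t glusterfs pd-wfe1:/volume1 /mnt/volume1
    # or as an fstab entry
    pd-wfe1:/volume1  /mnt/volume1  glusterfs  defaults,_netdev  0 0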
20:36 deniszh joined #gluster
20:59 xavih joined #gluster
20:59 malevolent joined #gluster
21:01 haomaiwang joined #gluster
21:02 dlambrig joined #gluster
21:07 mobaer joined #gluster
21:10 calavera joined #gluster
21:13 rafi joined #gluster
21:13 shyam joined #gluster
21:17 jlp1 joined #gluster
21:19 dblack joined #gluster
21:22 chirino_m joined #gluster
21:30 twaddle joined #gluster
21:41 frozengeek joined #gluster
21:47 arcolife joined #gluster
21:59 hagarth joined #gluster
22:01 haomaiwa_ joined #gluster
22:03 ovaistariq joined #gluster
22:20 mpingu joined #gluster
22:31 jmarley joined #gluster
22:31 jmarley joined #gluster
22:37 mpingu joined #gluster
22:38 frozengeek joined #gluster
22:40 frozengeek joined #gluster
22:41 hagarth joined #gluster
22:45 chirino_m joined #gluster
22:50 dlambrig joined #gluster
22:54 curratore joined #gluster
22:56 F2Knight joined #gluster
23:01 haomaiwang joined #gluster
23:02 pdrakeweb joined #gluster
23:02 cvstealt1 joined #gluster
23:03 paratai_ joined #gluster
23:05 saltsa joined #gluster
23:05 necrogami joined #gluster
23:05 plarsen joined #gluster
23:06 shyam joined #gluster
23:06 klaxa joined #gluster
23:06 ovaistariq joined #gluster
23:07 frozengeek joined #gluster
23:16 and` joined #gluster
23:21 cpetersen joined #gluster
23:22 cpetersen I'm wondering if there are any experts in here that can help me.  I'm looking to setup a cheap bit of shared storage for virtual machines.  I am currently down the road of DRBD but need more than two nodes.
23:23 cpetersen Can anyone recommend Gluster as an alternative?  Could it work like DRBD, merging three separate pieces of storage in to one multi-pathed iSCSI share?
23:23 JoeJulian What questions do you have?
23:24 JoeJulian A lot of us use gluster for vm images. I prefer its reliability to drbd.
23:24 cpetersen The problem is that I feel like the DR capabilities of the "backup" third node in DRBD are not what I'm looking for in an HA scenario.
23:24 JoeJulian iscsi though? vmware?
23:24 cpetersen Yes.
23:24 JoeJulian bummer
23:24 cpetersen :(
23:25 JoeJulian nfs is preferred, but sure, you can do iscsi.
23:25 JoeJulian nfs is more fault tolerant.
23:25 cpetersen What about multi-pathing?  Is it susceptible to corruption?
23:26 JoeJulian We haven't had good luck with its reliability.
23:26 JoeJulian We use nfs for vmware and it works great.
23:26 cpetersen iSCSI you mean; or NFS?
23:26 cpetersen Oh ok.
23:26 JoeJulian But don't tell anybody I touch vmware.
23:27 JoeJulian It'll ruin my reputation.
23:27 cpetersen Heh.  :)
23:28 cpetersen I have three servers and two critical applications.  vCenter manages all of them.  I have two SSDs in each server.  One does hypervisor and storage management VM, the other is consumed by that mgmt VM and re-introduces the storage from all three servers as one bit of shared storage.
23:28 cpetersen Would I be able to achieve this with Gluster?
23:28 cpetersen Like I said, the problem with DRBD is that I need more than two nodes.
23:28 JoeJulian Yes.
23:29 JoeJulian In my experience, drbd has more problems than just that. It causes flashbacks.
23:29 cpetersen Flashbacks?
23:29 JoeJulian Sleepless nights of sheer terror.
23:29 cpetersen LOL
23:30 gildub joined #gluster
23:30 cpetersen OK, I was about to google it for a technical term... :P
23:30 JoeJulian I've lost way too much data to drbd to ever try it again.
23:30 cpetersen Can you recommend a tutorial or a distro for Gluster to get it set up in my environment?
23:33 cpetersen Also, how is Gluster for bandwidth as it is a file-level implementation rather than block?
23:33 cpetersen How does it measure up with Ceph?
23:36 JoeJulian Our gluster installation is faster on the same hardware than our ceph one.
23:36 JoeJulian Can usually fill all available bandwidth.
23:37 cpetersen What about with lower bandwidth like say running a single VM off of a replicated store on a single gigabit connection?
23:37 cpetersen I know it's stupid, but is it possible?
23:37 JoeJulian Sure
23:37 JoeJulian I do that at home.
23:37 JoeJulian (just not vmware, of course)
23:37 cpetersen heh
23:38 cpetersen Better not say Hyper-V.
23:38 JoeJulian Lol!
23:38 JoeJulian kvm managed by openstack
23:38 cpetersen Have you heard of Pivot3?
23:38 cpetersen :)
23:39 cpetersen Wow that's weird.  I think I just found your blog if that's you... ha
23:39 JoeJulian It is.
23:40 JoeJulian I need to write something... It's getting stale.
23:40 nathwill joined #gluster
23:41 JoeJulian and no, hadn't heard of pivot3. Nobody's heard of IO either, but we're the 2nd largest privately owned datacenter company in the world.
23:42 cpetersen Alright so what I'm getting is that you would recommend GlusterFS out of anything like DRBD, MooseFS or Ceph for replicated shared storage to be used for VM infrastructure? :)
23:42 cpetersen Nope, never heard of IO.
23:42 JoeJulian Gluster or Ceph, yes.
23:42 cpetersen P3 is terrible.  Not even worth a look.
23:43 cpetersen Is there a distro out there with some tools to build a gluster cluster?  (haha)
23:43 JoeJulian Most of them. It's pretty universally distributed. Give Arch linux a spin. :)
23:45 JoeJulian Actually, now that I've said either one, I take it back for vmware. With ceph you'd have to map the rbd before sharing it as iscsi, or mount it and share it with nfs.
23:46 JoeJulian That means the failover isn't going to be clean.
23:48 cpetersen But using GlusterFS with NFS will be clean?
23:48 cpetersen For vCenter HA?
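A minimal sketch of the three-node replicated volume being discussed, for VM images served over NFS; all host names, brick paths and the volume name are illustrative, and the options shown are only the commonly suggested ones, not a vetted vSphere recipe:

    # run once, from any node, after glusterfs-server is installed on all three
    gluster peer probe node2
    gluster peer probe node3
    # one brick per server, 3-way replication
    gluster volume create vmstore replica 3 node1:/data/brick1/vmstore node2:/data/brick1/vmstore node3:/data/brick1/vmstore
    # apply the packaged "virt" option group for VM image workloads, if present
    gluster volume set vmstore group virt
    gluster volume start vmstore
    # the built-in NFS server JoeJulian refers to is enabled by default in the 3.6/3.7 releases discussed here;
    # ESXi would then mount node1:/vmstore as an NFS datastore (with its own HA caveats)
    gluster volume status vmstore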
23:54 cpetersen Why do so many people recommend DRBD for clusters anyway?  If there are so many horror stories??
