IRC log for #gluster, 2015-09-21

All times shown according to UTC.

Time Nick Message
00:12 Telsin vxitch1 ctdb works for me on rhel 7.1, in the base repo
00:38 harish joined #gluster
00:41 gildub joined #gluster
00:55 zhangjn joined #gluster
00:57 EinstCrazy joined #gluster
00:58 zhangjn joined #gluster
01:02 baojg joined #gluster
01:42 Lee1092 joined #gluster
01:45 calisto joined #gluster
01:57 nangthang joined #gluster
02:00 harish joined #gluster
02:07 haomaiwang joined #gluster
02:18 EinstCrazy joined #gluster
02:19 haomaiwa_ joined #gluster
02:44 bharata-rao joined #gluster
02:45 calisto left #gluster
02:48 haomaiwang joined #gluster
02:53 OriginalTim joined #gluster
02:53 OriginalTim Hey #Gluster, I was wondering if someone is around to help me with a replicated Gluster volume.
02:54 OriginalTim I created the volume with replica 2
02:54 OriginalTim and am currently trying to increase the number of bricks
02:54 OriginalTim without turning it into a distributed-replica.
02:55 OriginalTim Is this possible at all?
02:59 baojg joined #gluster
03:01 haomaiwa_ joined #gluster
03:04 OriginalTim Currently I've just added the extra bricks to the replica volume. It automatically changed it to a distributed-replicate.
03:06 OriginalTim The other option would be to make another replica of the replica, but this is an extra point of failure, and you would then need to re-mount the clients to the servers.
03:10 OriginalTim Or do people only add bricks in this scenario to increase storage space and IOPS?
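
For reference, Gluster can raise the replica count of an existing volume in place by passing the new count to add-brick; a minimal sketch, assuming a hypothetical two-brick replica 2 volume named gv0 and a hypothetical new brick path:

    # Raise replica 2 -> replica 3 without adding a distribute subvolume;
    # one new brick is needed per existing replica set (names hypothetical).
    gluster volume add-brick gv0 replica 3 server3:/export/brick1
    # Let self-heal copy the existing data onto the new brick:
    gluster volume heal gv0 full
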
03:17 jcastill1 joined #gluster
03:22 jcastillo joined #gluster
03:22 EinstCrazy joined #gluster
03:24 nishanth joined #gluster
03:26 baojg joined #gluster
03:37 gem joined #gluster
03:41 shubhendu joined #gluster
03:42 atinm joined #gluster
03:43 nangthang joined #gluster
03:44 nishanth joined #gluster
03:45 dgbaley joined #gluster
03:47 akay joined #gluster
03:56 TheSeven joined #gluster
03:57 akay hi guys, can someone confirm something for me... when i mount a gluster client to the volume using samba, it connects only to the server i specify - however when i use a fuse mount, the client opens connections to all nodes?
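
For context, the behaviour akay describes is by design: a native FUSE mount fetches the volume file from the named server and then opens direct connections to every brick, while a Samba/CIFS mount only ever talks to the single Samba server specified. A sketch of the two mount styles (hostnames and volume name hypothetical):

    # FUSE native mount: server1 is only used to fetch the volfile;
    # the client then connects to all bricks in the volume directly.
    mount -t glusterfs server1:/myvol /mnt/gluster

    # Samba/CIFS mount: all traffic is funnelled through server1.
    mount -t cifs //server1/myvol /mnt/smb -o user=guest
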
04:01 haomaiwa_ joined #gluster
04:02 itisravi joined #gluster
04:21 sakshi joined #gluster
04:22 RameshN joined #gluster
04:24 yazhini joined #gluster
04:28 nbalacha joined #gluster
04:28 ppai joined #gluster
04:31 onorua joined #gluster
04:37 vimal joined #gluster
04:38 kshlm joined #gluster
04:40 overclk joined #gluster
04:51 kshlm joined #gluster
04:54 ndarshan joined #gluster
04:55 pppp joined #gluster
04:57 alghost joined #gluster
05:01 haomaiwa_ joined #gluster
05:05 shubhendu joined #gluster
05:14 skoduri joined #gluster
05:15 baojg joined #gluster
05:17 hagarth joined #gluster
05:18 atalur_ joined #gluster
05:20 mash333 joined #gluster
05:22 Manikandan joined #gluster
05:23 Bhaskarakiran joined #gluster
05:27 mash333 joined #gluster
05:28 neha joined #gluster
05:28 ashiq joined #gluster
05:28 deepakcs joined #gluster
05:29 kdhananjay joined #gluster
05:29 rafi joined #gluster
05:33 kanagaraj joined #gluster
05:41 kotreshhr joined #gluster
05:52 R0ok_ joined #gluster
06:01 ctria joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 hgowtham joined #gluster
06:08 rjoseph joined #gluster
06:09 hagarth joined #gluster
06:12 arcolife joined #gluster
06:13 doekia joined #gluster
06:15 atalur_ joined #gluster
06:35 OriginalTim Anyone used an arbiter volume in their replication before?
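
For reference, GlusterFS 3.7 added arbiter volumes, where every third brick stores only file metadata and acts as a quorum tie-breaker against split-brain; a minimal sketch with hypothetical hosts and brick paths:

    # replica 3 arbiter 1: the first two bricks hold data,
    # the third holds only metadata (hostnames hypothetical).
    gluster volume create arbvol replica 3 arbiter 1 \
        server1:/export/brick1 server2:/export/brick1 server3:/export/arbiter1
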
06:36 [Enrico] joined #gluster
06:41 raghu joined #gluster
06:43 Pupeno joined #gluster
06:45 hagarth joined #gluster
06:47 LebedevRI joined #gluster
06:48 Pupeno joined #gluster
06:59 vmallika joined #gluster
07:03 deniszh joined #gluster
07:09 haomaiwa_ joined #gluster
07:10 nangthang joined #gluster
07:15 suliba joined #gluster
07:16 Philambdo joined #gluster
07:17 jiffin joined #gluster
07:21 [Enrico] joined #gluster
07:21 jcastill1 joined #gluster
07:25 Pupeno joined #gluster
07:26 Bhaskarakiran joined #gluster
07:26 jcastillo joined #gluster
07:28 anil joined #gluster
07:54 kshlm joined #gluster
07:56 yazhini joined #gluster
07:57 jockek joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 maveric_amitc_ joined #gluster
08:15 jcastill1 joined #gluster
08:17 Slashman joined #gluster
08:18 SOLDIERz joined #gluster
08:20 arcolife joined #gluster
08:20 jcastillo joined #gluster
08:23 Akee joined #gluster
08:42 karnan joined #gluster
08:49 onorua joined #gluster
08:52 Slashman joined #gluster
08:56 mhulsman joined #gluster
08:57 jcastill1 joined #gluster
09:01 jwd joined #gluster
09:02 jcastillo joined #gluster
09:07 jcastillo joined #gluster
09:08 fyxim joined #gluster
09:14 samsaffron___ joined #gluster
09:17 billputer joined #gluster
09:19 atinm joined #gluster
09:20 ocramuias joined #gluster
09:20 ocramuias Hello
09:20 glusterbot ocramuias: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:20 haomaiwa_ joined #gluster
09:21 haomaiwang joined #gluster
09:21 ocramuias I have a gluster volume with 40 nodes and 40 bricks that I created as Distributed only! Is it possible to change it to Distributed-Replicated? Thanks
09:21 Saravana_ joined #gluster
09:23 haomaiwang joined #gluster
09:24 hagarth ocramuias: do you already have data created in this volume?
09:24 ocramuias yes :(
09:25 ocramuias i use gluster as storage for mail services
09:26 yazhini joined #gluster
09:27 xavih joined #gluster
09:30 hagarth ocramuias: do you have enough storage space to create additional bricks?
09:31 ocramuias sure
09:31 ocramuias i have 40 bricks, 1 per server, and 2 volumes
09:31 ocramuias total space used is 4%
09:32 hagarth what is the amount of data stored at the moment?
09:32 ocramuias mmm i need to check
09:33 ocramuias 3.8T  121G  3.5T   4% /var/qmail
09:35 hagarth It might be a good idea to create a new distributed-replicated volume and rsync data from the old volume to the new one. Once done, you can clean up the old volume.
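
A rough sketch of that migration path, with hypothetical volume names, hosts, and mount points:

    # 1. Create and start the new distributed-replicated volume.
    gluster volume create mailvol-new replica 2 \
        srv1:/export/b1 srv2:/export/b1 srv3:/export/b2 srv4:/export/b2
    gluster volume start mailvol-new

    # 2. Mount both volumes and copy through the gluster mount points.
    mount -t glusterfs srv1:/mailvol-old /mnt/old
    mount -t glusterfs srv1:/mailvol-new /mnt/new
    rsync -a /mnt/old/ /mnt/new/

    # 3. Stop writers, do a final rsync, then point clients at the new
    #    volume and clean up the old one.
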
09:38 Schatzi joined #gluster
09:39 Schatzi hi @all
09:40 hagarth hello
09:40 glusterbot hagarth: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:42 muneerse joined #gluster
09:43 Schatzi does anybody have experience with hosting kvm virtual machines on a replicated gluster volume?
09:44 hagarth Schatzi: quite a few users use gluster in that fashion
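
As a starting point for VM hosting, the gluster server packages ship a "virt" option group that tunes a volume for large image files (eager locking and disabling some client-side caches, among other settings); a sketch assuming a hypothetical volume vmvol:

    # Applies the settings listed in /var/lib/glusterd/groups/virt in one step.
    gluster volume set vmvol group virt
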
09:45 Schatzi i want to build a setup for testing but i am not quite sure about the optimal hardware to buy
09:46 ocramuias vps  Schatzi
09:47 ocramuias hagarth, i don't know if i could lose some data with this solution
09:47 hagarth Schatzi: a mail on gluster-users might fetch you some hardware recommendations
09:48 ocramuias do i need to rsync only the data, or the data plus the gluster indexes?
09:48 hagarth ocramuias: recommend rsyncing from the gluster mount point
09:48 hagarth ocramuias: having a downtime for a final rsync should ensure that there is no data loss
09:49 hagarth ocramuias: the other option would be to convert a distributed volume to a distributed replicated volume and let self-heal synchronize data. given the amount of self-healing that needs to be done, it might affect your ongoing data traffic too.
09:51 ocramuias bah, the traffic is not a problem, gluster works over a 1 Gbps LAN connection
09:53 ocramuias is there a tutorial for converting the volume type?
09:53 Schatzi my dell servers have a quad nic setup on which i can do software bonding and get good speed. but ddr infiniband adapters are listed at a really cheap price on ebay... perhaps a dedicated infiniband connection only for gluster makes more sense?
09:54 ocramuias schatzi, do the dell servers run virtualization?
09:55 ocramuias openvz / virtuozzo ?
09:56 Schatzi yes i want them to act as proxmox hosts
09:56 hagarth ocramuias: need to run for a bit, but you can use the add-brick command to do the conversion. itisravi any recommendations here?
09:56 ocramuias proxmox is openvz based
09:57 Schatzi you can build openvz and kvm machines within proxmox
09:57 ocramuias i have some gluster clients with openvz / virtuozzo and bonding
09:57 ocramuias it is not a problem
10:00 * itisravi looks at the logs above.
10:00 ramky joined #gluster
10:00 Schatzi i am not sure about placing vps data, with thousands of small files per vps, on the gluster volume? some threads in the forum suggest kvm data with only one big image file performs much better? i do not know :-)
10:01 haomaiwang joined #gluster
10:02 hgichon joined #gluster
10:03 ocramuias Schatzi, i don't understand! What type of volume are you thinking of using?
10:03 ocramuias How many servers are you thinking of using?
10:04 itisravi ocramuias: add-brick + running heal full is the right way, but the heal might take a lot of time since you have 40 bricks to be replicated.
10:04 Schatzi i want to start with 3 servers and placing a replica on each server
10:04 itisravi ocramuias: and it can consume a lot of cpu cycles too.
10:04 ocramuias itisravi, does documentation exist for this?
10:05 ocramuias schatzi, replica 3 with how many GB or TB?
10:06 baojg joined #gluster
10:06 Schatzi so i have to build a replicated volume with replica count 3 if i am correct?
10:06 Schatzi yes :-)
10:06 itisravi ocramuias: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Expanding_Volumes is quite dated but should work.
10:06 ocramuias for example i created 40 servers and chose a small disk for each server (100 GB)
10:08 Schatzi i have a lot of 2 tb nl-sas disks
10:08 ocramuias thanks itisravi, i'll check the docs
10:08 ocramuias you'll create 3 servers with 2 TB per server?
10:09 Schatzi which i can use dedicated for gluster bricks
10:09 ocramuias replica 2 or 3 ?
10:10 Schatzi my final state will be 3 servers with 4x 2tb each doing replica 3
10:10 ocramuias you can schatzi
10:10 Schatzi or is this a bad idea?
10:10 ocramuias 4 hdd ?
10:10 ocramuias 4 bricks ?
10:10 Schatzi yes
10:11 itisravi ocramuias: The doc suggests running a rebalance after adding the bricks. If that doesn't replicate the data, do a `gluster volume heal <volname> full` after adding the new bricks.
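
Putting the two suggestions together, a sketch of the conversion with hypothetical hostnames and paths: going from a pure distribute volume of N bricks to replica 2 needs N new bricks, one to pair with each existing brick, all passed in a single add-brick call.

    # Each new brick pairs, in order, with an existing brick to form a replica set.
    gluster volume add-brick mailvol replica 2 \
        srv41:/export/brick srv42:/export/brick  # ...one new brick per existing brick

    # Then trigger a full self-heal to populate the new bricks:
    gluster volume heal mailvol full
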
10:11 Schatzi 3 servers with 4x 2tb bricks each
10:11 ocramuias but for me the big problem with gluster is not knowing where the replicas are stored
10:12 ocramuias because of this i chose to use only 1 brick per server and create 40 servers (with small hdds)
10:12 ocramuias are the bricks in your case real hdds?
10:12 atinm joined #gluster
10:12 Schatzi yes they are real hdd
10:13 ocramuias okay is good
10:13 Schatzi i read that gluster replicates in the order in which the bricks have been added?
10:14 Schatzi lets say i have srv01-03 and brick01-04 on each server
10:16 Schatzi if i start the volume with one brick on each server and replica 3, i think gluster will place a replica on each server :-)
10:16 Schatzi then i will add a second brick on each server
10:17 zhangjn joined #gluster
10:17 Schatzi but you are right with doing only one brick on each server
10:18 Schatzi perhaps it is the better way
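
The ordering rule being discussed works like this: replica sets are formed from consecutive bricks in the order they appear on the command line, so with 3 servers and 4 bricks each, listing one brick from each server per group of three keeps every replica set spread across all servers. A sketch with hypothetical names:

    # Each group of 3 consecutive bricks becomes one replica set,
    # so every set spans srv01, srv02 and srv03.
    gluster volume create vmvol replica 3 \
        srv01:/bricks/b1 srv02:/bricks/b1 srv03:/bricks/b1 \
        srv01:/bricks/b2 srv02:/bricks/b2 srv03:/bricks/b2 \
        srv01:/bricks/b3 srv02:/bricks/b3 srv03:/bricks/b3 \
        srv01:/bricks/b4 srv02:/bricks/b4 srv03:/bricks/b4
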
10:20 Schatzi what network performance do you have on your machines?
10:20 Schatzi 40 servers sounds awesome :-)
10:20 ocramuias i use vmware with 10 nodes
10:21 ocramuias and 4 storage in iscsi
10:21 ocramuias 2 switch iscsi for data
10:21 ocramuias 4 switch for lan and wan 1 Gb port
10:26 zhangjn joined #gluster
10:26 mhulsman joined #gluster
10:26 Schatzi and does your gluster volume perform better on smaller or bigger files?
10:27 ocramuias i use gluster for store emails
10:27 ocramuias i don't have big files
10:27 Schatzi oh okay :-)
10:27 ocramuias for emails is good
10:28 EinstCrazy joined #gluster
10:29 social joined #gluster
10:29 Schatzi i am asking because i already built a virtual setup for testing with 3 gluster nodes and 1 brick each
10:30 Schatzi and i realized that there is no big difference in performance in writing 1mb x 1000 times or 1000mb x 1 time
10:31 Bhaskarakiran_ joined #gluster
10:33 ocramuias in proxmox, how does ha work for the hdd?
10:34 ocramuias for example in vmware, in some cases i need to move the virtual server to another lun
10:34 ocramuias and if the vps has a big hdd, the move takes more time to perform
10:34 ocramuias for this reason i chose to have more virtual servers with small hdds
10:35 zhangjn joined #gluster
10:35 ocramuias but this is another discussion
10:36 Schatzi in proxmox the config for each vm is stored on all cluster nodes and i put the vm data on the gluster volume as shared storage
10:37 Schatzi so if one hardware node is going down, the vm is directly started on another node
10:37 Schatzi if all is correctly configured :-)
10:37 ocramuias okay
10:38 Schatzi sorry for my bad english or spelling errors, i am from germany ;-)
10:40 ocramuias i'm from italy (same problem)
10:40 ocramuias :P
10:40 Schatzi hehe okay ;-)
10:41 Bhaskarakiran joined #gluster
10:42 Schatzi so i will give it a try with multiple bricks on each server and have a look at whether the data is replicated in the correct order
10:43 ocramuias without virtualization, directly on bare metal?
10:43 Schatzi and if not, i will build a raid 5 on each node to have only one bigger brick on each node
10:44 Schatzi yes direct on the bare server...
10:45 ocramuias ehmmm, raid 5 replicates, and glusterfs replicates!
10:45 ocramuias how many replicas?
10:45 ocramuias in this case i think it's best not to create any raid and to replicate only via gluster
10:46 Schatzi yes because of that i want to build it without raid first :-)
10:47 Schatzi i only have these 3 servers and i have to look at how to get as much space as possible without losing data replication
10:49 ocramuias no raid for me
10:49 ocramuias disk 1 for the os, disks 2, 3 and 4 as bricks
10:50 Schatzi with 3 servers having 4x 2tb bricks each and replica count of 3 it results in round about 4x 2tb of total space
10:50 atinm joined #gluster
10:50 Schatzi i think i will put the os on a little flash storage on each server
10:51 ocramuias is good
10:51 ocramuias 4 x 2 TB = 8 TB per server, x 3 servers = 24 TB raw; 24 / 2 = 12 TB usable, or 24 / 3 = 8 TB
10:51 Schatzi so i am not losing a full hdd tray only for a few megabytes of os
10:52 Schatzi yes correct, 12 TB with replica 2 and 8 TB with replica 3
10:52 Schatzi if i am calculating correctly :-)
10:57 Schatzi but in the gluster docs the advice is to have a brick count that is a multiple of the replica count
10:58 ocramuias okay, 4 is a multiple of 2 ;)
10:58 ashiq joined #gluster
10:58 mash333 joined #gluster
10:59 Schatzi so i think when having 3 servers, each with 1, 2, 3 or 4 bricks, the better replica count is 3
11:00 ocramuias what type of files do you have?
11:00 Schatzi kvm data files
11:00 ocramuias server snapshot ?
11:00 Schatzi vm disk images
11:00 ocramuias server backup ?
11:01 18WAAQRSB joined #gluster
11:01 Schatzi no the real data files the virtual machines are working on
11:01 ocramuias real ?
11:02 ocramuias not sure that's a good idea ;)
11:02 Schatzi no good idea? :-)
11:02 ocramuias mmm no for me no
11:03 Schatzi why not? you think it is not fast enough?
11:03 ocramuias disk images need hardware replication and tested solutions, i don't know if anyone in the room has tried it
11:03 ocramuias i need to go afk for 2 hours, see you later Schatzi
11:03 [Enrico] joined #gluster
11:04 Schatzi ok see you later
11:04 ocramuias Enrico, are you italian?
11:04 ocramuias bye
11:12 arcolife joined #gluster
11:13 mhulsman joined #gluster
11:19 rjoseph joined #gluster
11:20 gildub joined #gluster
11:22 mash333- joined #gluster
11:36 rjoseph joined #gluster
11:40 ashiq joined #gluster
11:41 firemanxbr joined #gluster
11:51 ctria joined #gluster
11:54 chirino joined #gluster
11:59 itisravi_ joined #gluster
12:05 Mr_Psmith joined #gluster
12:05 itisravi joined #gluster
12:08 David-Varghese joined #gluster
12:09 Schatzi joined #gluster
12:11 itisravi_ joined #gluster
12:12 kotreshhr left #gluster
12:14 mhulsman1 joined #gluster
12:15 mhulsman joined #gluster
12:16 jrm16020 joined #gluster
12:21 mhulsman1 joined #gluster
12:22 ocramuias joined #gluster
12:22 ocramuias hi @all
12:22 Schatzi hi ocramuias
12:25 arcolife joined #gluster
12:28 mhulsman joined #gluster
12:32 mhulsman1 joined #gluster
12:34 harish joined #gluster
12:42 shaunm joined #gluster
12:45 ctria joined #gluster
12:54 jcastill1 joined #gluster
12:55 unclemarc joined #gluster
12:59 jcastillo joined #gluster
13:00 overclk joined #gluster
13:02 mpietersen joined #gluster
13:03 mpietersen joined #gluster
13:05 rwheeler joined #gluster
13:09 julim joined #gluster
13:17 shyam joined #gluster
13:23 haomaiwa_ joined #gluster
13:35 harold joined #gluster
13:35 enzob joined #gluster
13:38 onorua joined #gluster
13:42 dgbaley joined #gluster
13:47 kdhananjay joined #gluster
13:48 DV_ joined #gluster
13:51 Lee1092 joined #gluster
13:52 bluenemo joined #gluster
13:52 marlinc_ joined #gluster
13:53 dgbaley joined #gluster
13:55 dlambrig left #gluster
13:55 GB21 joined #gluster
13:57 DV joined #gluster
13:59 calisto joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 muneerse2 joined #gluster
14:05 overclk joined #gluster
14:07 zhangjn joined #gluster
14:14 overclk joined #gluster
14:14 _maserati joined #gluster
14:19 zhangjn joined #gluster
14:21 hagarth @channelstats
14:21 glusterbot hagarth: On #gluster there have been 409673 messages, containing 15473798 characters, 2546434 words, 9165 smileys, and 1306 frowns; 1812 of those messages were ACTIONs.  There have been 189757 joins, 4698 parts, 185425 quits, 29 kicks, 2379 mode changes, and 8 topic changes.  There are currently 269 users and the channel has peaked at 299 users.
14:21 amye joined #gluster
14:25 DV joined #gluster
14:26 poornimag joined #gluster
14:29 [Enrico] joined #gluster
14:31 mpietersen joined #gluster
14:34 bluenemo lol. good to know :D
14:39 spcmastertim joined #gluster
14:45 mpietersen joined #gluster
14:45 _Bryan_ joined #gluster
14:51 zhangjn joined #gluster
14:52 harish joined #gluster
14:54 ocramuias1 joined #gluster
14:56 johnmark joined #gluster
14:57 EinstCrazy joined #gluster
14:58 Bhaskarakiran joined #gluster
14:59 DV_ joined #gluster
15:00 a_ta joined #gluster
15:01 calisto joined #gluster
15:01 haomaiwa_ joined #gluster
15:06 zhangjn joined #gluster
15:07 zhangjn joined #gluster
15:09 zhangjn joined #gluster
15:10 ndevos hi amye, maybe join #gluster-dev too? some things you would like to do/track/... maybe? https://botbot.me/freenode/gluster-dev/2015-09-21/?msg=50180805&page=2
15:10 glusterbot Title: IRC Logs for #gluster-dev | BotBot.me [o__o] (at botbot.me)
15:10 zhangjn joined #gluster
15:12 zhangjn joined #gluster
15:13 marlinc joined #gluster
15:15 overclk joined #gluster
15:24 mhulsman joined #gluster
15:24 wushudoin joined #gluster
15:25 jrm16020 joined #gluster
15:25 ocramuias joined #gluster
15:26 mhulsman1 joined #gluster
15:29 mhulsman joined #gluster
15:32 mhulsman1 joined #gluster
15:35 nangthang joined #gluster
15:40 EinstCrazy joined #gluster
15:45 cholcombe joined #gluster
15:53 hchiramm_home joined #gluster
15:53 zhangjn joined #gluster
15:55 haomaiwa_ joined #gluster
16:00 rafi joined #gluster
16:01 haomaiwa_ joined #gluster
16:02 JoeJulian bug 1181779
16:03 amye ndevos: sorry, you caught me in a meeting. I'm actually just going to set up a screen so that I'm always in here. (and there.) :)
16:14 suliba joined #gluster
16:16 overclk joined #gluster
16:19 CU-Paul joined #gluster
16:23 calavera joined #gluster
16:26 jcastill1 joined #gluster
16:27 suliba joined #gluster
16:29 skoduri joined #gluster
16:31 jcastillo joined #gluster
16:33 deniszh1 joined #gluster
16:35 rafi joined #gluster
16:38 Rapture joined #gluster
16:42 jiffin joined #gluster
16:47 epoch joined #gluster
16:50 CyrilPeponnet Hi guys, I have one question regarding "Root squashing doesn't happen for clients in trusted storage pool" from https://github.com/gluster/glusterfs/blob/release-3.6/doc/release-notes/3.6.0.md#minor-improvements
16:50 glusterbot Title: glusterfs/3.6.0.md at release-3.6 · gluster/glusterfs · GitHub (at github.com)
16:50 CyrilPeponnet what does this mean ?
16:51 CyrilPeponnet Can I mount a volume from one node from the cluster and access it without the root-squashing ?
16:55 hagarth joined #gluster
16:56 JoeJulian CyrilPeponnet: "Provide one more option for mounting which actually says root-squash
16:56 JoeJulian should/should not happen. This value is given priority only for the trusted
16:56 JoeJulian clients. For non trusted clients, the volume option takes the priority. But
16:56 JoeJulian for trusted clients if root-squash should not happen, then they have to be
16:56 JoeJulian mounted with root-squash=no option. (This is done because by default
16:56 JoeJulian blocking root-squashing for the trusted clients will cause problems for smb
16:56 JoeJulian and UFO clients for which the requests have to be squashed if the option is
16:56 JoeJulian enabled)."
16:56 JoeJulian (gah, should have removed the newlines)
16:57 JoeJulian Found that at http://review.gluster.org/#/c/4863/
16:57 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:57 CyrilPeponnet oh, so I can mount a volume from a trusted peer, set no root-squash as a client option, and this will work!
16:57 CyrilPeponnet Good to know :)
16:58 CyrilPeponnet Thanks @JoeJulian you made my day easier :)
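
Based on the change quoted above, a mount on a trusted peer would look something like this (host and volume name hypothetical; the option is only honoured for clients inside the trusted storage pool):

    # Disable root-squashing for this mount only, on a trusted client.
    mount -t glusterfs -o root-squash=no server1:/myvol /mnt/myvol
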
16:59 haomaiwa_ joined #gluster
17:00 GB21_ joined #gluster
17:02 haomaiwa_ joined #gluster
17:03 haomaiwa_ joined #gluster
17:04 hchiramm_home joined #gluster
17:05 Intensity joined #gluster
17:13 overclk joined #gluster
17:18 amye joined #gluster
17:20 shaunm joined #gluster
17:27 coredump joined #gluster
17:27 poornimag joined #gluster
17:29 rwheeler joined #gluster
17:31 overclk joined #gluster
17:32 coredump joined #gluster
17:45 kanagaraj joined #gluster
17:46 jiffin joined #gluster
17:47 poornimag joined #gluster
17:52 enzob joined #gluster
17:54 mpietersen joined #gluster
18:13 cristov_mac joined #gluster
18:13 cristov_mac Hi there~
18:14 cristov_mac anyone hear me ?
18:16 mhulsman joined #gluster
18:24 mpietersen joined #gluster
18:28 skoduri joined #gluster
18:30 Akee joined #gluster
18:32 JoeJulian hello
18:32 glusterbot JoeJulian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:32 JoeJulian cristov_mac: ^
18:34 togdon joined #gluster
18:37 skoduri_ joined #gluster
18:40 kotreshhr joined #gluster
18:48 mhulsman joined #gluster
19:02 skoduri__ joined #gluster
19:05 Pupeno joined #gluster
19:13 skoduri_ joined #gluster
19:14 skoduri joined #gluster
19:16 mpietersen joined #gluster
19:25 dgbaley joined #gluster
19:26 shaunm joined #gluster
19:43 _maserati hai!
19:43 _maserati I beat the system.
20:05 mhulsman joined #gluster
20:10 DavidVargese joined #gluster
20:16 shaunm joined #gluster
20:20 Rydekull joined #gluster
20:42 Mr_Psmith joined #gluster
20:43 coreping joined #gluster
20:43 edong23 joined #gluster
21:01 edong23 joined #gluster
21:01 jobewan joined #gluster
21:10 DV joined #gluster
21:12 [7] joined #gluster
21:45 Pupeno joined #gluster
22:00 ChrisNBlum joined #gluster
22:01 amye joined #gluster
22:08 DV joined #gluster
22:09 shyam joined #gluster
22:19 srsc left #gluster
22:22 gildub joined #gluster
22:44 johnweir joined #gluster
22:50 smokinggun joined #gluster
22:53 bennyturns joined #gluster
22:54 smokinggun greetings - any best practices for ensuring a client mounts when the client is started before the servers are ready? (i.e. all machines have been rebooted). All servers are Ubuntu 15.04
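
One common approach is to defer or retry the mount until the network and glusterd are actually up, rather than mounting strictly at boot; a hedged fstab sketch for a systemd-based system like Ubuntu 15.04 (hostnames and volume name hypothetical):

    # Defer the mount until first access and order it after the network;
    # backupvolfile-server names a second host to fetch the volfile from.
    srv1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,noauto,x-systemd.automount,backupvolfile-server=srv2  0  0
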
23:07 skoduri joined #gluster
23:10 ctria joined #gluster
23:11 Mr_Psmith joined #gluster
23:17 ctria joined #gluster
23:25 skoduri joined #gluster
23:51 nangthang joined #gluster
23:54 zhangjn joined #gluster
23:54 zhangjn joined #gluster
23:55 zhangjn joined #gluster
23:56 zhangjn joined #gluster
23:56 gildub joined #gluster
23:59 EinstCrazy joined #gluster
