
IRC log for #gluster, 2015-04-29


All times shown according to UTC.

Time Nick Message
00:45 edwardm61 joined #gluster
01:18 Pupeno joined #gluster
01:45 soumya joined #gluster
01:50 chirino joined #gluster
01:52 chirino joined #gluster
01:55 kdhananjay joined #gluster
02:06 harish joined #gluster
02:13 coredump joined #gluster
02:14 ttkg joined #gluster
02:16 LebedevRI joined #gluster
02:24 rjoseph joined #gluster
02:43 rjoseph joined #gluster
02:48 Peppaq joined #gluster
02:57 vimal joined #gluster
03:01 bharata-rao joined #gluster
03:04 wushudoin joined #gluster
03:06 coredump joined #gluster
03:07 Pupeno joined #gluster
03:14 gildub joined #gluster
03:15 shubhendu joined #gluster
03:25 soumya joined #gluster
03:30 Pupeno joined #gluster
03:45 Manikandan joined #gluster
03:45 Manikandan_ joined #gluster
03:49 uebera|| joined #gluster
03:49 raghug joined #gluster
03:54 itisravi joined #gluster
03:54 itisravi joined #gluster
03:55 atinmu joined #gluster
03:58 kanagaraj joined #gluster
03:59 nbalacha joined #gluster
04:00 kumar joined #gluster
04:00 lalatenduM joined #gluster
04:05 overclk joined #gluster
04:11 wkf joined #gluster
04:13 rjoseph joined #gluster
04:14 shubhendu joined #gluster
04:19 RameshN joined #gluster
04:20 anoopcs joined #gluster
04:21 jiffin joined #gluster
04:25 sakshi joined #gluster
04:26 wkf joined #gluster
04:28 vikumar joined #gluster
04:29 hagarth joined #gluster
04:30 kotreshhr joined #gluster
04:36 smohan joined #gluster
04:36 smohan joined #gluster
04:38 rjoseph joined #gluster
04:38 raghug joined #gluster
04:39 ndarshan joined #gluster
04:40 dusmant joined #gluster
04:40 shubhendu joined #gluster
04:45 anoopcs joined #gluster
04:47 maveric_amitc_ joined #gluster
04:48 nbalacha joined #gluster
04:49 poornimag joined #gluster
04:56 anoopcs joined #gluster
05:04 rafi joined #gluster
05:08 meghanam joined #gluster
05:12 gem joined #gluster
05:16 pppp joined #gluster
05:19 SOLDIERz joined #gluster
05:22 Apeksha joined #gluster
05:26 free_amitc_ joined #gluster
05:29 Anjana joined #gluster
05:29 glusterbot News from newglusterbugs: [Bug 1216310] Disable rpc throttling for glusterfs protocol <https://bugzilla.redhat.com/show_bug.cgi?id=1216310>
05:29 jermudgeon left #gluster
05:29 maveric_amitc_ joined #gluster
05:29 jermudgeon joined #gluster
05:29 Bhaskarakiran joined #gluster
05:35 gem joined #gluster
05:50 gem joined #gluster
05:50 kdhananjay joined #gluster
05:55 Anjana joined #gluster
05:59 liquidat joined #gluster
06:05 nishanth joined #gluster
06:07 ppai joined #gluster
06:09 coredump joined #gluster
06:09 pcaruana joined #gluster
06:10 spandit joined #gluster
06:13 kshlm joined #gluster
06:15 anil joined #gluster
06:16 ghenry joined #gluster
06:16 ghenry joined #gluster
06:18 soumya joined #gluster
06:23 nbalacha joined #gluster
06:24 jtux joined #gluster
06:24 rjoseph joined #gluster
06:28 raghu joined #gluster
06:29 xavih joined #gluster
06:29 malevolent joined #gluster
06:33 Guest19140 joined #gluster
06:36 uxbod Good morning, all
06:36 uxbod MASTER/SLAVE geo-rep is now working, but is MASTER/MASTER possible ?
06:37 mmbash joined #gluster
06:37 msvbhat uxbod: No, That is not supported now
06:37 msvbhat uxbod: By supported, I mean not possible :)
06:37 uxbod :(
06:38 uxbod have a 5ms latency between the two sites so wonder if I could get away with mounting across the link
06:40 hagarth joined #gluster
06:42 cholcombe joined #gluster
06:44 uxbod that appears to work quite well ... so will leave geo-rep in place for full site failure and x-mount for ACTIVE/ACTIVE access :)
06:52 msvbhat uxbod: You mean mounting slave from master and syncing data to it?
07:15 o5k_ joined #gluster
07:17 kovshenin joined #gluster
07:18 o5k joined #gluster
07:18 SOLDIERz joined #gluster
07:18 shubhendu joined #gluster
07:20 gem joined #gluster
07:23 Anjana joined #gluster
07:25 uxbod @msvbhat: am mounting a volume, from master, across a link
07:26 uxbod MASTER geo-reps to SLAVE, so in case of full site failure can repoint to the SLAVE
07:26 hflai joined #gluster
07:26 cholcombe joined #gluster
07:27 jtux joined #gluster
07:28 [Enrico] joined #gluster
07:29 glusterbot News from newglusterbugs: [Bug 1093692] Resource/Memory leak issues reported by Coverity. <https://bugzilla.redhat.com/show_bug.cgi?id=1093692>
07:29 glusterbot News from newglusterbugs: [Bug 1216898] Data Tiering: Volume inconsistency errors getting logged when attaching uneven(odd) number of hot bricks in hot tier(pure distribute tier layer) to a dist-rep volume <https://bugzilla.redhat.com/show_bug.cgi?id=1216898>
07:30 msvbhat uxbod: Yeah, You can (should) redirect all your I/O to SLAVE when MASTER site fails.
07:32 uxbod then once MASTER site comes back we can re-sync data back ... should provide pretty good resilience with limited downtime
07:34 fsimonce joined #gluster
07:38 Philambdo joined #gluster
07:38 anrao joined #gluster
07:40 msvbhat There must be some doc somewhere out there about failover failback
07:40 msvbhat uxbod: https://github.com/gluster/glusterfs/blob/release-3.5/doc/admin-guide/en-US/markdown/admin_geo-replication.md
07:42 uxbod thank you
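For reference, a minimal sketch of the geo-replication session lifecycle behind the failover discussion above, assuming a master volume mastervol and a slave volume slavevol on slavehost (all names illustrative; the linked admin guide is the authoritative reference):

    # create and start the master -> slave session (3.5-style push-pem setup)
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start

    # confirm the session is syncing
    gluster volume geo-replication mastervol slavehost::slavevol status

    # on full master-site failure, stop the session (if the master is still
    # reachable) and repoint clients at the slave volume
    gluster volume geo-replication mastervol slavehost::slavevol stop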
07:44 kovshenin joined #gluster
07:45 hflai joined #gluster
07:48 R0ok_ joined #gluster
07:52 Slashman joined #gluster
07:53 shubhendu joined #gluster
07:57 Pupeno joined #gluster
08:01 al joined #gluster
08:12 the-me joined #gluster
08:16 uxbod joined #gluster
08:26 deniszh joined #gluster
08:33 ktosiek joined #gluster
08:34 dusmant joined #gluster
08:34 SOLDIERz joined #gluster
08:35 [Enrico] joined #gluster
08:43 DV joined #gluster
08:55 Anjana joined #gluster
08:58 meghanam joined #gluster
08:59 soumya joined #gluster
08:59 deepakcs joined #gluster
09:00 dusmant joined #gluster
09:05 cholcombe joined #gluster
09:07 anrao joined #gluster
09:14 Norky joined #gluster
09:22 ctria joined #gluster
09:27 meghanam joined #gluster
09:30 glusterbot News from newglusterbugs: [Bug 1216940] Disperse volume: glusterfs crashed while testing heal <https://bugzilla.redhat.com/show_bug.cgi?id=1216940>
09:31 SOLDIERz joined #gluster
09:32 sac anrao
09:33 anrao sac: yes
09:33 rafi1 joined #gluster
09:36 rwheeler joined #gluster
09:38 soumya joined #gluster
09:44 jmarley joined #gluster
09:45 hagarth joined #gluster
09:47 karnan joined #gluster
09:47 poornimag joined #gluster
09:51 rjoseph joined #gluster
09:51 nbalacha joined #gluster
09:55 SOLDIERz joined #gluster
09:56 harish joined #gluster
10:01 jiku joined #gluster
10:13 jcastill1 joined #gluster
10:18 ira joined #gluster
10:19 jcastillo joined #gluster
10:21 karnan joined #gluster
10:23 akamensky joined #gluster
10:30 glusterbot News from newglusterbugs: [Bug 1216965] (glusterfs-3.6.4) GlusterFS 3.6.4 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1216965>
10:30 glusterbot News from newglusterbugs: [Bug 1216960] data tiering: do not allow tiering related volume set options on a regular volume <https://bugzilla.redhat.com/show_bug.cgi?id=1216960>
10:31 soumya joined #gluster
10:34 sac anrao, I pinged to call you to the demo.
10:42 nishanth joined #gluster
10:49 shubhendu joined #gluster
10:55 gem_ joined #gluster
11:00 glusterbot News from newglusterbugs: [Bug 1216976] Data Tiering:do not allow detach-tier when the volume is in "stopped" status <https://bugzilla.redhat.com/show_bug.cgi?id=1216976>
11:03 SOLDIERz joined #gluster
11:07 gildub joined #gluster
11:15 rafi joined #gluster
11:20 _Bryan_ joined #gluster
11:20 firemanxbr joined #gluster
11:26 nhayashi joined #gluster
11:26 here_and_there joined #gluster
11:26 here_and_there hello
11:26 glusterbot here_and_there: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:27 here_and_there woot, nice reception.
11:28 here_and_there just wanted to ask whether deleting .glusterfs directory is safe when rebuilding a volume (data didn't change, just wanted to rebuild volume and cluster after system and gluster upgrade)
11:32 lpabon joined #gluster
11:34 shubhendu joined #gluster
11:44 itisravi joined #gluster
11:47 jdarcy joined #gluster
11:48 rjoseph joined #gluster
11:49 ndarshan joined #gluster
11:50 RameshN joined #gluster
11:51 Gill joined #gluster
11:56 cholcombe joined #gluster
11:56 malevolent joined #gluster
11:56 xavih joined #gluster
11:58 kkeithley joined #gluster
12:00 JustinClift *** REMINDER *** The Weekly Gluster Community meeting is starting NOW in #gluster-meeting on Freenode ***
12:03 anoopcs joined #gluster
12:13 rjoseph joined #gluster
12:20 harish_ joined #gluster
12:28 gem__ joined #gluster
12:30 glusterbot News from newglusterbugs: [Bug 1206429] Maintainin local transaction peer list in op-sm framework <https://bugzilla.redhat.com/show_bug.cgi?id=1206429>
12:38 lalatenduM joined #gluster
12:48 Anjana joined #gluster
12:48 Leildin joined #gluster
12:51 SOLDIERz joined #gluster
12:52 Leildin JoeJulian, quick question about rebalancing and fragmentation on a volume.
12:53 harish_ joined #gluster
12:55 Leildin is there a notion of fragmentation like in a normal disk on a gluster volume ?
12:59 jcastill1 joined #gluster
13:00 anrao_ joined #gluster
13:03 dusmant joined #gluster
13:03 tom[] joined #gluster
13:03 halfinhalfout joined #gluster
13:05 hagarth joined #gluster
13:05 jcastillo joined #gluster
13:05 julim joined #gluster
13:06 chirino joined #gluster
13:08 julim joined #gluster
13:09 Bhaskarakiran joined #gluster
13:09 rafi1 joined #gluster
13:10 halfinhalfout joined #gluster
13:13 rafi joined #gluster
13:15 anrao joined #gluster
13:16 bene2 joined #gluster
13:16 hamiller joined #gluster
13:18 kdhananjay joined #gluster
13:22 theron joined #gluster
13:27 fsimonce joined #gluster
13:29 jmarley joined #gluster
13:29 Leildin joined #gluster
13:30 rjoseph joined #gluster
13:32 ir2ivps5 joined #gluster
13:32 halfinhalfout joined #gluster
13:33 plarsen joined #gluster
13:33 klaxa|work joined #gluster
13:33 soumya joined #gluster
13:34 Anjana1 joined #gluster
13:35 wkf joined #gluster
13:36 ira joined #gluster
13:36 spandit joined #gluster
13:38 B21956 joined #gluster
13:39 lalatenduM joined #gluster
13:43 xavih joined #gluster
13:43 malevolent joined #gluster
13:46 dusmant joined #gluster
13:49 dbruhn joined #gluster
13:56 ilbot3 joined #gluster
13:56 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
13:59 anrao_ joined #gluster
13:59 anrao joined #gluster
14:00 firemanxbr joined #gluster
14:04 georgeh-LT2 joined #gluster
14:05 meghanam joined #gluster
14:06 ira joined #gluster
14:07 halfinhalfout1 joined #gluster
14:10 haomaiwa_ joined #gluster
14:11 nbalacha joined #gluster
14:14 rshade98 @ppa
14:14 glusterbot rshade98: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
14:15 SOLDIERz joined #gluster
14:16 coredump joined #gluster
14:27 SOLDIERz joined #gluster
14:31 julim joined #gluster
14:37 JoeJulian Leildin: no, but the filesystem on the underlying brick may.
14:38 JoeJulian here_and_there: It should be safe to delete the .glusterfs directory to rebuild a volume.
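As a rough sketch, reusing a brick path for a rebuilt volume usually means clearing the old volume's metadata on each brick while the volume is stopped and deleted (/mnt/brick is an illustrative path; the xattr names are the standard GlusterFS markers):

    # remove the gfid hard-link store left over from the old volume
    rm -rf /mnt/brick/.glusterfs

    # clear the volume markers so "gluster volume create" does not refuse the
    # path as already being part of a volume
    setfattr -x trusted.glusterfs.volume-id /mnt/brick
    setfattr -x trusted.gfid /mnt/brick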
14:41 RameshN joined #gluster
14:42 roost__ joined #gluster
14:42 shubhendu joined #gluster
14:45 Slashman joined #gluster
14:51 ghenry joined #gluster
15:02 hagarth joined #gluster
15:03 ashiq joined #gluster
15:10 ctria joined #gluster
15:13 coredump joined #gluster
15:24 itisravi joined #gluster
15:28 kdhananjay joined #gluster
15:28 soumya joined #gluster
15:36 theron joined #gluster
15:42 Prilly joined #gluster
15:43 ktosiek joined #gluster
15:59 uxbod joined #gluster
16:01 glusterbot News from newglusterbugs: [Bug 1217135] readdir-ahead needs to be enabled by default for new volumes on gluster-3.7 <https://bugzilla.redhat.com/show_bug.cgi?id=1217135>
16:04 Manikandan joined #gluster
16:07 Salcoder joined #gluster
16:07 Salcoder Hey guys, I have setup a replication gluster configuration but when I write files in one host the changes aren't being replicated.
16:08 KDDLB joined #gluster
16:08 KDDLB left #gluster
16:08 Salcoder I was wondering if there's a set amount of time I need to wait or if the changes are distributed automatically to the next probe?
16:08 Salcoder Just like in a MySQL master-master replication.
16:09 Salcoder I'm using a local filesystem directory (a mounted harddrive solution will impact performance based on our configuration) that's why we are using a local folder.
16:10 Salcoder Thanks in advance.
16:14 itisravi Salcoder: You need to mount the volume and write to it from there.
16:15 Salcoder I did mount the volume.
16:16 Salcoder You mean by doing this right: gluster volume create name_volume replica 2 transport tcp server1.foobar.com:/mnt/data server2.foobar.com:/mnt/data
16:16 JoeJulian As long as the client is connected to all the bricks, writes will be synchronous.
16:16 JoeJulian No, that's creating a volume.
16:16 Salcoder Then I start the volume and check the status to make sure the volume status has started.
16:17 Salcoder When I check with gluster volume info it shows Brick: gluster0:/mnt/gluster and Brick2: gluster1:/mnt/gluster
16:17 julim joined #gluster
16:17 JoeJulian https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
16:17 RameshN joined #gluster
16:17 Salcoder My question is, since I'm replicating a local directory in the filesystem, do I need to do some other configuration (like mounting) or can I just specify the directory?
16:17 Salcoder Also, regarding permissions, should the directory be owned by root or some other username like gluster for example?
16:18 itisravi Salcoder: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md#manual-mount
16:19 Salcoder So, even though I have setup the replication... I have to also setup the clients on both servers?
16:19 Salcoder This is where I'm getting confused.
16:19 Salcoder Both servers have glusterfs-server installed.
16:20 Salcoder So, in order to access the information from both web servers, I need to mount the /mnt/gluster on both servers as clients?
16:21 itisravi In order to access the volume, you have to mount it. The mount can be anywhere- on the servers that contain the bricks or a completely different machine.
16:22 itisravi So yes, If you want to access the vol. from both servers, you would have to mount it on both.
16:23 JoeJulian GlusterFS... the FS stands for filesystem. It's a clustered filesystem, just like xfs or ext4 is a filesystem. You can't just write blocks of data to the middle of your block device and expect xfs to know what to do with it. The same is true for glusterfs. You've created a volume, now you must mount a client to use that volume.
16:23 Salcoder Let me read a little bit more then.
16:24 Salcoder Got it. Thanks a lot guys. Let me read a little bit more about it.
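A condensed sketch of the create-then-mount flow described above, reusing the hostnames and brick paths from the earlier example (the client mount point is illustrative):

    # on one server: define and start the replicated volume
    gluster volume create name_volume replica 2 transport tcp \
        server1.foobar.com:/mnt/data server2.foobar.com:/mnt/data
    gluster volume start name_volume

    # on every machine that needs to read or write the data, including the
    # servers themselves: mount the volume with the FUSE client and only ever
    # use the mount point, never the brick directory
    mkdir -p /mnt/glusterfs-client
    mount -t glusterfs server1.foobar.com:/name_volume /mnt/glusterfs-client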
16:27 xiu joined #gluster
16:35 dddh joined #gluster
16:40 zerick joined #gluster
16:41 firemanxbr joined #gluster
16:44 ktosiek joined #gluster
16:57 Salcoder Hey guys... Thanks so much I was able to make it work. Now, I have one question though.
16:58 Salcoder Let's say the volume is created at /gluster-volume and the mount point is /storage-pool
16:59 Salcoder If I write a file in /gluster-volume I notice that the file doesn't get replicated. Although, when I create a file in /storage-pool then the file gets replicated.
16:59 Salcoder It pretty much means that I can't write files directly to the volume and instead I have to write to the mounted directory only?
16:59 theron_ joined #gluster
17:00 coredump joined #gluster
17:00 halfinhalfout1 Salcoder: that's correct
17:01 JoeJulian That's correct. Otherwise you're bypassing all the logic and it's the equivalent to: echo "Hello!" | dd of=/dev/sda1 seek=23578256
17:02 Salcoder Outstanding.
17:02 Salcoder Thanks a lot for the information! Extremely helpful indeed.
17:16 Rapture joined #gluster
17:17 Salcoder Quick concern, when the data is transferred between peers, is that data sent in clear or is it encrypted?
17:18 Salcoder I was reading something about being sent over sshfs but I'm not completely sure.
17:19 JoeJulian https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_ssl.md
17:27 jobewan joined #gluster
17:28 RameshN joined #gluster
17:28 Salcoder @JoeJulian: It pretty much means that I need to generate certificates on both peers and switch them across both peers. Now, it talks about identities. The certificate will have to be named 'zaphod.pem' and 'zaphod.key' just as an example, correct? Does that mean that I shouldn't configure the auth.allow option then?
17:29 Salcoder Or whatever the name of the identity is, then that's the name of the certificate?
17:29 JoeJulian I don't see a problem with using it as well.
17:30 JoeJulian Could save from a ddos, I suppose.
17:31 JoeJulian Though I personally feel that's better handled in iptables.
17:31 JoeJulian Or a real firewall.
17:31 Arminder joined #gluster
17:32 glusterbot News from newglusterbugs: [Bug 1206587] Replace contrib/uuid by a libglusterfs wrapper that uses the uuid implementation from the OS <https://bugzilla.redhat.com/show_bug.cgi?id=1206587>
17:34 Salcoder That's also true. Even though the servers are facing the Internet, I guess I could talk to my provider and give private addressing to my app servers and manage everything like you said with a firewall.
17:35 Salcoder At least that way, information is not traveling over the Internet.
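A hedged sketch of the TLS setup the linked admin_ssl.md covers, using the guide's example identity zaphod (file paths follow the guide's defaults; the volume name myvol and the second identity trillian are illustrative):

    # on each node: generate a key and a self-signed cert whose CN is the identity
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=zaphod" -out /etc/ssl/glusterfs.pem

    # on every node: concatenate everyone's certificates into one CA file
    cat zaphod.pem trillian.pem > /etc/ssl/glusterfs.ca

    # enable TLS on the brick I/O path and restrict connections by identity
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'zaphod,trillian'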
17:38 theron joined #gluster
17:45 jcastill1 joined #gluster
17:49 rbazen joined #gluster
17:50 jcastillo joined #gluster
17:51 rafi joined #gluster
18:11 tom[] joined #gluster
18:17 uxbod @Salcoder: IPSec not an option ?
18:17 pppp joined #gluster
18:19 ira joined #gluster
18:20 JoeJulian Not a lot of difference between ipsec and tls as far as function unless you're offloading ipsec onto its own hardware. My concern is that syn flooding the gluster ports is just as bad as udp flooding isakmp.
18:21 JoeJulian That's where your firewall comes in to play, dropping traffic that doesn't come from a trusted address.
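As a sketch of that firewall approach, assuming the trusted peers and clients all live in 10.0.0.0/24 and a 3.4-era brick port range starting at 49152 (both are assumptions to adapt):

    # allow gluster management (24007/24008) and brick ports only from trusted hosts
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 24007:24008 -j ACCEPT
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 49152:49251 -j ACCEPT
    # drop the same ports for everyone else
    iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
    iptables -A INPUT -p tcp --dport 49152:49251 -j DROP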
18:21 side_control joined #gluster
18:22 o5k_ joined #gluster
18:24 o5k__ joined #gluster
18:29 Salcoder uxbod: I could look into that.
18:30 Salcoder JoeJulian: My concern would be for people sniffing the network and capturing packets.
18:32 JoeJulian Right, tls or ipsec prevents that. My concern is having a service that's open to the internet leaves open the potential for a-holes to be a-holes.
18:33 Salcoder Haha... Well said.
18:37 schwing joined #gluster
18:43 schwing i'm doing a fix-layout rebalance of my volume and i've been watching the log file and it appears as if the rebalance has restarted at the top and is processing a lot of the same directories again.  it looked like it was rebalancing the directories in alpha-numeric order, but i could be wrong.  is there any reason why a rebalance would loop like this?
18:43 lalatenduM joined #gluster
18:45 schwing i've disconnected the only client that was connected so there is no new writes to the volume
18:46 schwing running gluster 3.5.2 and hoping to upgrade to 3.6.3 once the volume finishes its rebalance
18:47 B21956 left #gluster
18:48 schwing also, i see a lot of informational messages in the logs like ... [2015-04-29 18:46:45.608622] I [dht-layout.c:754:dht_layout_dir_mismatch] 0-gv0-dht: subvol: gv0-replicate-1; inode layout - 3988183914 - 4294967295; disk layout - 2147483646 - 2454267023
18:48 JoeJulian schwing: I'm pretty certain it's not in a sorted order.
18:48 B21956 joined #gluster
18:48 JoeJulian layout mismatches should be fixed by the fix-layout
18:49 schwing almost every directory is spitting out log entries like this.  should there be this many?
18:49 schwing hi, JoeJulian.  thanks for taking a minute to answer.
18:53 schwing do these log lines indicate that the volume is fragmented?
19:01 firemanxbr hey guys, how-to remove "Other names" in my peers ?
19:01 firemanxbr I don't remove my peers, but this error present in my env:
19:02 firemanxbr http://ur1.ca/k9otx
19:05 deniszh joined #gluster
19:12 shaunm_ joined #gluster
19:38 javi404 joined #gluster
19:41 itspete joined #gluster
19:41 javi404_ joined #gluster
19:46 itspete I've got one 1.6TB Gluster volume set up with geo-replication to another Gluster volume in another datacentre.  About 1.1TB synced before it all but stopped.  Is there a way to force it to complete the sync? (The docs recommend erasing the index but don't mention how one might actually do that.)
19:56 javi404_ joined #gluster
19:56 bennyturns joined #gluster
20:05 nsoffer joined #gluster
20:11 halfinhalfout1 itspete: this worked for me. 1) stop geo-rep for the vol 2) delete the geo-rep session 3) 'gluster volume set indexing off' for that vol
20:11 halfinhalfout1 4) re-create the geo-rep session 5) start geo-rep session
20:12 itspete halfinhalfout1: Thanks!  I'll give that a shot.
20:13 halfinhalfout1 syntax for turning off indexing: "gluster volume set <volume_name> geo-replication.indexing off"
20:13 iainhallam joined #gluster
20:14 halfinhalfout1 oh, and you'll need to re-create the geo-rep session w/ "create force" .. otherwise it warns you there is already data there and won't create the session
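Put together as commands, the sequence halfinhalfout1 outlines looks roughly like this (mastervol, slavehost and slavevol are placeholders):

    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol delete
    gluster volume set mastervol geo-replication.indexing off
    # "create force" is needed because the slave volume already holds data
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol start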
20:18 badone__ joined #gluster
20:20 iainhallam I've got a server with 8 4TB drives and I'm looking at getting more to put into a converged oVirt/Gluster environment, serving both VMs and file storage.
20:20 iainhallam Question is, should I set up RAIDs on the servers or just make each disk a brick?
20:20 iainhallam (There are separate system disks.)
20:21 chirino_m joined #gluster
20:25 B21956 joined #gluster
20:27 DV joined #gluster
20:35 JoeJulian schwing, no, not fragmented. See my article on dht misses for details about how dht is used with gluster.
20:35 JoeJulian @lucky dht misses are expensive
20:35 glusterbot JoeJulian: https://joejulian.name/blog/dht-misses-are-expensive/
20:35 JoeJulian schwing: ^
20:36 JoeJulian Thanks for fielding that one halfinhalfout1.
20:36 halfinhalfout1 np
20:37 JoeJulian iainhallam: raid should be used at the brick level to ensure your storage can keep up with your network. Fault-tolerance should, imho, be handled with replication.
20:38 iainhallam OK. I was looking at disperse volumes to get maximum space usage with resilience.
20:38 iainhallam If I increase the size of an array housing a brick, would Gluster be able to use that increased space?
20:39 JoeJulian yes
20:39 iainhallam As the same brick, but now of a larger size?
20:39 JoeJulian I just haven't proven disperse in production, so I have no opinion on that yet.
20:39 iainhallam :) Fair enough.
20:39 JoeJulian correct
20:40 iainhallam Would it matter if different servers had different array sizes for a disperse volume, do you know?
20:40 iainhallam (And therefore brick sizes, I suppose.)
20:41 JoeJulian I'm pretty sure you still want similar brick sizes.
20:42 iainhallam Right. Just thinking of future expansion.
20:42 iainhallam Thanks for the info.
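For context, a purely illustrative comparison of the two layouts being weighed here -- replication on top of RAID-backed bricks versus an erasure-coded (disperse) volume across more, smaller bricks (hostnames, counts and redundancy are made up):

    # replica 3 on RAID-backed bricks: disk failures absorbed by the local
    # array, node failures absorbed by replication
    gluster volume create vmstore replica 3 \
        host1:/bricks/raid6/vmstore host2:/bricks/raid6/vmstore host3:/bricks/raid6/vmstore

    # disperse 4+2: better usable-space ratio, any two of the six bricks can fail
    gluster volume create filestore disperse 6 redundancy 2 \
        host{1..6}:/bricks/disk1/filestore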
20:42 JoeJulian weighting is still being developed.
20:42 iainhallam I saw a mention of that in the docs I read on the Wiki.
20:42 cholcombe joined #gluster
20:43 iainhallam Suggested 3.7 if it wasn't in 3.6, but I guess that's not likely.
20:43 JoeJulian it may be.
20:43 JoeJulian I'll know a lot more next month when I go to Barcelona.
20:44 iainhallam A conference?
20:44 JoeJulian Gluster Summit
20:46 iainhallam I'll be interested to hear what happens!
20:49 schwing JoeJulian: thanks for that link.  i've actually read that one from your blog.  hopefully that's what the fix-layout rebalance will remedy
20:51 JoeJulian It should.
21:06 Pupeno_ joined #gluster
21:07 kovshenin joined #gluster
21:25 o5k_ joined #gluster
21:28 o5k joined #gluster
21:33 lexi2 joined #gluster
21:41 o5k_ joined #gluster
21:44 o5k__ joined #gluster
21:44 Arminder- joined #gluster
21:56 eightyeight joined #gluster
21:57 eightyeight i have just upgraded my debian servers to debian 8 with glusterfs 3.5.2
21:57 eightyeight however, i had to re-probe the peers, as they went missing
21:57 eightyeight and no volumes are present
21:58 eightyeight previously, i was on gluster 3.2. how can i get my volumes back?
21:58 JoeJulian Wow, that's a big jump.
21:58 eightyeight indeed
21:59 JoeJulian iirc, 3.2 debs used /etc/glusterd for state. That's since been moved to /var/lib/glusterd
21:59 JoeJulian You should be able to mv it.
21:59 Pupeno joined #gluster
21:59 eightyeight then just reload the daemon?
21:59 JoeJulian You'll then need to run an update command.... one sec while I find it.
22:00 eightyeight ok
22:00 JoeJulian glusterd --xlator-option *.upgrade=on -N
22:02 eightyeight ah. i see the PID from jan 28 still running. which makes sense, with the VM images running on top of it
22:04 eightyeight ugh. this is going to suck, and i'm really not in the mood right now
22:04 * eightyeight tables for tomorrow
22:05 JoeJulian Yeah, sorry, that's a downtime upgrade.
22:06 eightyeight yeah
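A rough sketch of the downtime procedure above, assuming the old Debian packages kept their state in /etc/glusterd and that the fresh /var/lib/glusterd contains nothing worth keeping (verify both before removing anything):

    # stop gluster (and any clients using it) first -- this is a downtime upgrade
    service glusterfs-server stop

    # move the old state directory to the location the new version expects
    rm -rf /var/lib/glusterd
    mv /etc/glusterd /var/lib/glusterd

    # regenerate volfiles for the new version, then start normally
    glusterd --xlator-option '*.upgrade=on' -N
    service glusterfs-server start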
22:12 kovsheni_ joined #gluster
22:19 nsoffer joined #gluster
22:23 Pupeno joined #gluster
23:18 plarsen joined #gluster
23:33 gildub joined #gluster
23:51 nick9871 joined #gluster
23:57 nick9871 Having trouble setting up GlusterFS + CTDB. File locking only works when clients map to the same samba share. Have followed GlusterFS set up and CTDB set up instructions but still can't get it to work. Is it expected that all clients will access the same physical samba share? (and glusterFS takes care of replication)
23:59 jhc76 joined #gluster
