
IRC log for #gluster, 2015-04-23


All times shown according to UTC.

Time Nick Message
00:01 pdrakeweb joined #gluster
00:43 plarsen joined #gluster
00:54 msmith joined #gluster
01:19 haomaiwa_ joined #gluster
01:27 lyang0 joined #gluster
01:27 hexa- joined #gluster
01:28 hexa- Hello
01:28 glusterbot hexa-: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:29 ccha3 joined #gluster
01:45 msmith joined #gluster
01:45 vincent_vdk joined #gluster
01:45 plarsen joined #gluster
01:46 msmith joined #gluster
01:49 PaulCuzner joined #gluster
02:23 wkf joined #gluster
02:24 dusmant joined #gluster
02:31 nangthang joined #gluster
02:35 wdennis joined #gluster
02:54 meghanam joined #gluster
02:59 poornimag joined #gluster
03:07 kdhananjay joined #gluster
03:23 maveric_amitc_ joined #gluster
03:28 kanagaraj joined #gluster
03:31 gildub_ joined #gluster
03:32 gildub_ joined #gluster
03:35 raghug joined #gluster
03:36 gildub joined #gluster
03:42 itisravi joined #gluster
03:47 overclk joined #gluster
03:50 atinmu joined #gluster
04:02 kumar joined #gluster
04:08 hagarth joined #gluster
04:17 kshlm joined #gluster
04:22 anoopcs joined #gluster
04:23 wkf joined #gluster
04:23 Guest47283 joined #gluster
04:25 jcastill1 joined #gluster
04:27 nbalacha joined #gluster
04:30 jcastillo joined #gluster
04:32 soumya joined #gluster
04:35 jiffin joined #gluster
04:36 badone_ joined #gluster
04:48 ndarshan joined #gluster
04:53 sakshi joined #gluster
04:57 jiku joined #gluster
04:58 schandra joined #gluster
04:59 soumya joined #gluster
05:00 ppai joined #gluster
05:01 gem__ joined #gluster
05:05 kotreshhr joined #gluster
05:08 Bhaskarakiran joined #gluster
05:09 kdhananjay joined #gluster
05:10 deepakcs joined #gluster
05:10 rafi joined #gluster
05:11 maveric_amitc_ joined #gluster
05:12 ashiq joined #gluster
05:12 wushudoin joined #gluster
05:13 gem__ joined #gluster
05:14 hagarth joined #gluster
05:24 raghu joined #gluster
05:27 Manikandan joined #gluster
05:27 Manikandan_ joined #gluster
05:27 anil joined #gluster
05:33 gem joined #gluster
05:36 spandit joined #gluster
05:38 pppp joined #gluster
05:39 SOLDIERz joined #gluster
05:44 bharata-rao joined #gluster
05:46 hagarth joined #gluster
05:48 ashiq joined #gluster
05:50 Manikandan joined #gluster
05:57 glusterbot News from newglusterbugs: [Bug 1214563] [FEAT] Trash translator <https://bugzilla.redhat.com/show_bug.cgi?id=1214563>
06:00 DV joined #gluster
06:04 ppai_ joined #gluster
06:06 jvandewege_ joined #gluster
06:07 hgowtham joined #gluster
06:09 lifeofguenter joined #gluster
06:09 Manikandan joined #gluster
06:14 nishanth joined #gluster
06:14 jtux joined #gluster
06:20 liquidat joined #gluster
06:20 lalatenduM joined #gluster
06:24 DV joined #gluster
06:25 hchiramm joined #gluster
06:29 glusterbot News from resolvedglusterbugs: [Bug 1210182] Brick start fails, if source is compiled with disable-tiering. <https://bugzilla.redhat.com/show_bug.cgi?id=1210182>
06:29 glusterbot News from resolvedglusterbugs: [Bug 1206553] Data Tiering:Need to allow detaching of cold tier too <https://bugzilla.redhat.com/show_bug.cgi?id=1206553>
06:40 harish_ joined #gluster
06:44 javi404 joined #gluster
06:44 atalur joined #gluster
06:46 karnan joined #gluster
06:49 karnan_ joined #gluster
06:49 kshlm joined #gluster
06:54 haomaiwa_ joined #gluster
07:07 [Enrico] joined #gluster
07:11 AlexKey joined #gluster
07:15 AlexKey Setting up HA gluster, is it safe to use IP failover if the gluster volume is mounted using glusterfs-client (not CIFS)? E.g. I have 2 bricks on two separate VMs; one VM has address 192.168.1.100, the other 192.168.1.101. Then, using heartbeat or some other tool, set up IP failover on address 192.168.1.99 and mount that address. Is this setup safe to use, or should I be aware of something?
07:24 lifeofguenter joined #gluster
07:26 lifeofguenter joined #gluster
07:28 akamensky joined #gluster
07:29 akamensky Setting up HA gluster, is it safe to use IP failover if the gluster volume is mounted using glusterfs-client (not CIFS)? E.g. I have 2 bricks on two separate VMs; one VM has address 192.168.1.100, the other 192.168.1.101. Then, using heartbeat or some other tool, set up IP failover on address 192.168.1.99 and mount that address. Is this setup safe to use, or should I be aware of something?
07:29 AlexKey left #gluster
07:32 ghenry joined #gluster
07:32 ghenry joined #gluster
07:36 fsimonce joined #gluster
07:36 kdhananjay joined #gluster
07:38 Prilly joined #gluster
07:40 ricky-ticky1 joined #gluster
07:48 Prilly joined #gluster
07:53 partner akamensky: if you have a replica 2 volume on your gluster then it's kind of HA already, especially when using the native glusterfs client; it can withstand losing either of the gluster servers
07:54 partner but maybe you should explain your plans a bit more, i'm not sure what exactly you are planning to use heartbeat and ip failover for
07:55 ppai_ joined #gluster
07:56 partner quickly googling for some image to explain the setup and behaviour on fault situation, found this: https://support.dce.felk.cvut.cz/mediawiki/images/thumb/5/52/glusterfs_replicated.svg/887px-glusterfs_replicated.svg.png
07:57 akamensky partner: Imagine that we have 2 VMs running a replica 2 volume. Each VM has its own IP address, so on the client I mount using one of those IP addresses. Now if one of these 2 VMs goes down for whatever reason, the data will still be accessible on the second VM, but in order for my app on the App VM to access it I would have to mount it using the other IP address.
07:58 partner first of all, use the dns. secondly, gluster only needs to get the volume info from any server, it will then know where the bricks are
07:59 partner even if you go with the ip addresses it will work
08:01 akamensky partner: then maybe I misunderstood how I should correctly mount it. Because right now I mount it from fstab using "192.168.1.100:volume /path/to/mount glusterfs defaults 0 2"
08:02 partner akamensky: that mount looks ok, though you should add the backupvolfile parameter pointing to your other VM
08:02 akamensky So by what I understand my setup is already HA, and if I manually take down 192.168.1.100 then the mounted partition will still be available?
08:02 partner how the mount happens is that the line actually just instructs the glusterfs client to go look for the volume info at that address. once it gets the volume info it will include the details of where exactly the bricks are, and the client connects there directly
08:03 partner umm yes
08:03 ktosiek joined #gluster
08:03 partner BUT, if you would attempt to mount with that line (and without the backupvolfile option) your mount would fail in case that box would be offline at that very moment
08:03 partner all the gluster servers in peer know details of all the volumes so reaching any of them is enough
08:05 akamensky Got it, so the mount reports that it is mounted from a specific IP, but in fact the underlying glusterfs (on the client) takes care of HA by default based on the peers available in the volume info. Correct?
08:06 partner yes
08:06 partner "The server specified in the mount command is only used to fetch the glusterFS configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount). "
08:08 partner that's from the RH manual, it's really used only for fetching the vol info. but as that one single box can be down at mount time you should add this to your fstab entry options: backupvolfile-server=server name
08:08 partner "name of the backup volfile server to mount the client. If this option is added while mounting fuse client, when the first volfile server fails, then the server specified in backupvolfile-server option is used as volfile server to mount the client. "
08:09 partner ie put VM 2 address there
08:09 hgowtham joined #gluster
08:11 akamensky partner: Awesome, thanks for the info. I think in this case actually Heartbeat could be used to ensure that on mount time client will try to reach machine that is available (by failing over IP address) and later on it will be irrelevant. I will try to play with this kind of setup on test system.
08:11 akamensky Also could be great if someone who takes care of Getting Started on gluster.org could add this information there, otherwise it is not very clear (at least it wasn't for me)
08:13 partner IMO it only complicates an otherwise very simple setup
08:14 harish_ joined #gluster
08:16 partner i use dns round robin to figure out what gluster servers are around and reaching any of them will give the client the needed info
08:17 partner i have not used for example /etc/hosts file but i guess it should work too to reach both your servers
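For reference, the round-robin approach partner describes is just one DNS name with an A record per gluster server; a rough sketch follows (zone fragment and fstab line are illustrative, the name and addresses are assumptions):

    ; BIND-style zone fragment: one name, two A records
    gluster    IN  A  192.168.1.100
    gluster    IN  A  192.168.1.101

    # fstab then points at the name instead of a single server address
    gluster:/yourvolume  /your/mountpoint  glusterfs  defaults,_netdev  0  0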
08:17 aravindavk joined #gluster
08:18 Slashman joined #gluster
08:20 partner http://funwithlinux.net/2013/02/glusterfs-tips-and-tricks-centos/ - read the "Tip:  Use host names when you configure you cluster and peers." explaining why you should use names instead of ip addresses
08:22 T0aD joined #gluster
08:27 partner it actually seems there is not a word about mounting on the getting started guide, it ends when the volume is up and running :o
08:27 akamensky Perhaps, but I am reluctant to use hosts file for this as it can be easily overlooked by another admin not familiar with this setup
08:27 akamensky exactly :D and no (even simplistic) explanation on how things work
08:28 partner https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-Administration_Guide-GlusterFS_Client.html#sect-Administration_Guide-GlusterFS_Client-GlusterFS_Client-Native
08:28 partner there's the explanations
08:28 partner seriously, use the dns
08:31 akamensky Why do I need to put an extra service on the production system? I am happy to use just gluster as it does what it's supposed to, but adding DNS to production seems like serious overkill and one more system to take care of.
08:33 ctria joined #gluster
08:36 partner its really up to you, i didn't say you need to put it in. it just makes life so much easier. for me the dns is pretty much equal to ip addresses, both need to exist :)
08:36 partner but don't put that heartbeat thingy either, its not needed and it would be extra service to your production as well
08:37 marbu joined #gluster
08:39 Norky joined #gluster
08:40 vikumar joined #gluster
08:40 partner hmm can't even think how to set that up exactly, the virtual ip would need to be part of the gluster peer in order to be able to serve volume info.. and if one would move the vip from the faulted server to a functional one that should confuse gluster nicely.. hmm, need to try it out :)
08:40 lifeofguenter joined #gluster
08:40 akamensky heartbeat isn't a separate service, it is a super lightweight addition to linux networking. as simple as that. it runs on the machines that are in the HA setup and makes those machines check on each other + assigns an IP alias to one of them. if the machine that has the IP alias at the moment goes down (not responding on the network), then the other one takes the alias over. so it doesn't add any special service the way DNS does. And I am actually surprised
08:40 akamensky that Gluster does not have this out of the box, as there is already a layer of communication between all nodes.
08:42 partner its a client-side issue really
08:42 partner and gluster does have all the tools available to make clients get the info from the servers
08:43 partner i don't see a single reason why one would even need to move virtual ip around for gluster
08:44 akamensky I get it, gluster is HA already _after_ it is mounted, but there are situations where, due to load, the system may spawn another app VM that on boot will need to mount this volume, or the system is rebooting, so mount time is also quite essential I believe.
08:45 partner i provided you with the solution already earlier
08:46 partner please read the mounting part and look into the various options you should definitely add to your fstab entry as suggested earlier: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-Administration_Guide-GlusterFS_Client.html#sect-Administration_Guide-GlusterFS_Client-GlusterFS_Client-Mounting_Volumes
08:46 akamensky Right, and it is nice to have a solution :) just my guts are not very happy to have DNS running on mission critical servers
08:47 partner those will work without dns
08:49 partner your fstab entry would be something like: 192.168.1.100:/yourvolume /your/mountpoint glusterfs _netdev,backupvolfile-server=192.168.1.101,fetch-attempts=10 0 0
08:49 partner or something such
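A minimal sketch of the same thing as a one-off mount command, using the addresses from this conversation (treat the exact option names as something to verify against the mount.glusterfs man page for your client version):

    # the address given here is only used to fetch the volfile;
    # backupvolfile-server is tried if the first server is down at mount time
    mount -t glusterfs -o backupvolfile-server=192.168.1.101 192.168.1.100:/yourvolume /your/mountpoint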
08:50 DV joined #gluster
08:52 nishanth joined #gluster
08:53 nangthang joined #gluster
08:58 ppai_ joined #gluster
09:03 akamensky yes, I get it, just saying that in my opinion it would be a nice addition to the gluster-server functionality
09:05 partner http://blog.gluster.org/category/mount-glusterfs/
09:14 mbukatov joined #gluster
09:14 nbalacha joined #gluster
09:26 dusmant joined #gluster
09:28 glusterbot News from newglusterbugs: [Bug 1214629] RFE: Remove disperse-data option in the EC volume creation command <https://bugzilla.redhat.com/show_bug.cgi?id=1214629>
09:32 atalur joined #gluster
09:36 kovshenin joined #gluster
09:38 nbalacha joined #gluster
09:41 kovsheni_ joined #gluster
09:43 kovshenin joined #gluster
09:44 hagarth joined #gluster
09:47 kovshenin joined #gluster
09:53 kovshenin joined #gluster
09:55 kovshenin joined #gluster
09:57 hgowtham joined #gluster
09:58 glusterbot News from newglusterbugs: [Bug 1214644] Upcall: Migrate state during rebalance/tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1214644>
09:59 kovshenin joined #gluster
10:01 ira_ joined #gluster
10:02 kovshenin joined #gluster
10:05 ira_ joined #gluster
10:07 kovsheni_ joined #gluster
10:09 kovsheni_ joined #gluster
10:14 kdhananjay joined #gluster
10:14 kovshenin joined #gluster
10:17 gildub joined #gluster
10:17 sakshi joined #gluster
10:20 kovshenin joined #gluster
10:22 kotreshhr1 joined #gluster
10:23 kovsheni_ joined #gluster
10:25 kovshenin joined #gluster
10:26 kkeithley1 joined #gluster
10:26 hgowtham joined #gluster
10:27 lifeofguenter joined #gluster
10:27 jiffin joined #gluster
10:28 glusterbot News from newglusterbugs: [Bug 1214649] BitRot :- File should be removed from signing queue  if reopened for writing in 120 second <https://bugzilla.redhat.com/show_bug.cgi?id=1214649>
10:28 glusterbot News from newglusterbugs: [Bug 1214654] Self-heal: Migrate lease_locks as part of self-heal process <https://bugzilla.redhat.com/show_bug.cgi?id=1214654>
10:28 glusterbot News from newglusterbugs: [Bug 1214662] File is not evecting from the signing process even thogh file is modified <https://bugzilla.redhat.com/show_bug.cgi?id=1214662>
10:32 kovsheni_ joined #gluster
10:33 atalur joined #gluster
10:35 kovshenin joined #gluster
10:35 lifeofgu_ joined #gluster
10:37 lifeofg__ joined #gluster
10:37 kovshenin joined #gluster
10:38 haomaiwang joined #gluster
10:39 lifeofguenter joined #gluster
10:39 harish_ joined #gluster
10:45 Leildin joined #gluster
10:45 kovsheni_ joined #gluster
10:48 kovshenin joined #gluster
10:50 kovsheni_ joined #gluster
10:53 kovshenin joined #gluster
10:55 kovshen__ joined #gluster
10:56 Peppard joined #gluster
10:58 glusterbot News from newglusterbugs: [Bug 1214666] Data Tiering:command prompt hangs when fetching quota list of a tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1214666>
11:00 kovshenin joined #gluster
11:03 kovshenin joined #gluster
11:04 ashiq joined #gluster
11:04 ashiq joined #gluster
11:06 kovsheni_ joined #gluster
11:06 firemanxbr joined #gluster
11:08 anrao joined #gluster
11:10 kovshenin joined #gluster
11:12 XpineX joined #gluster
11:18 kovsheni_ joined #gluster
11:22 kovshenin joined #gluster
11:24 ashiq joined #gluster
11:28 glusterbot News from newglusterbugs: [Bug 1214671] Diagnosis and recommended fix to be added in glusterd-messages.h <https://bugzilla.redhat.com/show_bug.cgi?id=1214671>
11:28 glusterbot News from newglusterbugs: [Bug 1214677] Failed to issue method call while upgrading to 3.7.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1214677>
11:30 kovshenin joined #gluster
11:33 kovshenin joined #gluster
11:38 kovsheni_ joined #gluster
11:42 deniszh joined #gluster
11:44 LebedevRI joined #gluster
11:47 fsimonce joined #gluster
11:49 jiffin1 joined #gluster
12:00 rafi1 joined #gluster
12:02 anoopcs joined #gluster
12:03 fsimonce joined #gluster
12:03 meghanam joined #gluster
12:03 soumya joined #gluster
12:06 kanagaraj joined #gluster
12:08 itisravi joined #gluster
12:09 kanagaraj joined #gluster
12:10 DV joined #gluster
12:15 anrao joined #gluster
12:17 theron joined #gluster
12:22 poornimag joined #gluster
12:23 fsimonce joined #gluster
12:24 khanku joined #gluster
12:26 jiffin joined #gluster
12:27 kovshenin joined #gluster
12:27 Bhaskarakiran joined #gluster
12:28 klaxa|work joined #gluster
12:28 glusterbot News from newglusterbugs: [Bug 1194546] Write behind returns success for a write irrespective of a conflicting lock held by another application <https://bugzilla.redhat.com/show_bug.cgi?id=1194546>
12:31 soumya joined #gluster
12:32 kotreshhr joined #gluster
12:46 plarsen joined #gluster
12:56 Gill_ joined #gluster
12:59 rafi joined #gluster
12:59 plarsen joined #gluster
13:02 shaunm_ joined #gluster
13:03 rjoseph joined #gluster
13:03 bennyturns joined #gluster
13:10 soumya joined #gluster
13:13 poornimag joined #gluster
13:15 dgandhi joined #gluster
13:16 atalur joined #gluster
13:16 dgandhi joined #gluster
13:17 dgandhi joined #gluster
13:18 dgandhi joined #gluster
13:20 dgandhi joined #gluster
13:21 dgandhi joined #gluster
13:21 dgandhi joined #gluster
13:22 ashiq joined #gluster
13:23 dgandhi joined #gluster
13:23 hgowtham joined #gluster
13:24 kdhananjay joined #gluster
13:24 dgandhi joined #gluster
13:25 dgandhi joined #gluster
13:25 georgeh-LT2 joined #gluster
13:26 dgandhi joined #gluster
13:26 theron_ joined #gluster
13:26 dgandhi joined #gluster
13:27 B21956 joined #gluster
13:27 dgandhi joined #gluster
13:30 dgandhi joined #gluster
13:30 Pupeno joined #gluster
13:30 dgandhi joined #gluster
13:31 dgandhi joined #gluster
13:32 dgandhi joined #gluster
13:33 dgandhi joined #gluster
13:33 dgandhi joined #gluster
13:34 dgandhi joined #gluster
13:35 dgandhi joined #gluster
13:36 dgandhi joined #gluster
13:37 dgandhi joined #gluster
13:38 hexa- I have set up a volume on 3.6.2 and want to add a brick now that also runs 3.6.2
13:38 hexa- and I get volume add-brick: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.
13:38 glusterbot hexa-: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
13:38 hexa- how do I choose a reasonable op version?
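If every node really is on 3.6.2, the command glusterbot refers to would look like this, with 30600 taken from the error message above:

    gluster volume set all cluster.op-version 30600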
13:39 dgandhi joined #gluster
13:40 dgandhi joined #gluster
13:40 ZakWolfinger joined #gluster
13:41 dgandhi joined #gluster
13:42 ZakWolfinger Gluster 3.5.3 servers and client, volume is a replica 2.  When tar'ing a directory tree from the client I'm getting random "File removed before we read it" and if I try to copy the tree I get random "No such file or directory".  Looking on the servers, the files exist.  Volume heal gv0 info shows nothing needing healed.  Any thoughts?
13:43 dgandhi joined #gluster
13:44 dgandhi joined #gluster
13:44 maZtah joined #gluster
13:45 dgandhi joined #gluster
13:46 poornimag joined #gluster
13:47 dgandhi joined #gluster
13:48 kdhananjay joined #gluster
13:48 jmarley joined #gluster
13:48 hamiller joined #gluster
13:49 dgandhi joined #gluster
13:49 dgandhi joined #gluster
13:50 schwing joined #gluster
13:51 hgowtham joined #gluster
13:51 dgandhi joined #gluster
13:52 dgandhi joined #gluster
13:54 dgandhi joined #gluster
13:55 dgandhi joined #gluster
13:56 dgandhi joined #gluster
13:57 msciciel joined #gluster
13:57 jcastill1 joined #gluster
13:57 dgandhi joined #gluster
13:58 dgandhi joined #gluster
13:59 glusterbot News from newglusterbugs: [Bug 1214772] gluster xml empty output volume status detail <https://bugzilla.redhat.com/show_bug.cgi?id=1214772>
13:59 dgandhi joined #gluster
14:00 dgandhi joined #gluster
14:00 rjoseph joined #gluster
14:01 dgandhi joined #gluster
14:02 dgandhi joined #gluster
14:02 Pupeno joined #gluster
14:02 marbu joined #gluster
14:02 dgandhi joined #gluster
14:04 dgandhi joined #gluster
14:05 dgandhi joined #gluster
14:06 dgandhi joined #gluster
14:06 jobewan joined #gluster
14:08 dgandhi joined #gluster
14:09 ashiq joined #gluster
14:09 dgandhi joined #gluster
14:10 plarsen joined #gluster
14:11 dgandhi joined #gluster
14:11 dgandhi joined #gluster
14:11 yossarianuk joined #gluster
14:13 dgandhi joined #gluster
14:14 dgandhi joined #gluster
14:15 jcastillo joined #gluster
14:15 dgandhi joined #gluster
14:16 dgandhi joined #gluster
14:18 dgandhi joined #gluster
14:18 ashiq- joined #gluster
14:19 dgandhi joined #gluster
14:19 Manikandan joined #gluster
14:19 Manikandan_ joined #gluster
14:20 yossarianuk hi - I have a existing 2 server replicated Glusterfs vol
14:20 dgandhi joined #gluster
14:20 yossarianuk is there a sensible way of changing this to a master->slave geo-replicated volume ?
14:21 dgandhi joined #gluster
14:22 haomaiwang joined #gluster
14:22 dgandhi joined #gluster
14:24 dgandhi joined #gluster
14:25 dgandhi joined #gluster
14:26 dgandhi joined #gluster
14:27 dgandhi joined #gluster
14:27 dgandhi joined #gluster
14:28 dgandhi joined #gluster
14:28 jobewan joined #gluster
14:29 dgandhi joined #gluster
14:30 dgandhi joined #gluster
14:31 bennyturns joined #gluster
14:31 yossarianuk i.e. do I have to delete the existing replicated volume and recreate it using https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
14:31 atinmu joined #gluster
14:31 yossarianuk Or can I convert an existing replicated volume?
14:31 dgandhi joined #gluster
14:32 dgandhi joined #gluster
14:33 wushudoin joined #gluster
14:34 dgandhi joined #gluster
14:35 kdhananjay joined #gluster
14:35 dgandhi joined #gluster
14:36 kotreshhr left #gluster
14:36 haomaiwang joined #gluster
14:37 hexa- how can I find out which op-version all my nodes support?
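One way to check on a 3.6-era install is to read glusterd's state file on each node (path is the usual default and may differ per distro); it shows the op-version the node is currently operating at, while the maximum it supports is determined by its installed glusterfs version:

    grep operating-version /var/lib/glusterd/glusterd.info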
14:38 dgandhi joined #gluster
14:38 dgandhi joined #gluster
14:39 dgandhi joined #gluster
14:40 haomaiwa_ joined #gluster
14:41 dgandhi joined #gluster
14:42 dgandhi joined #gluster
14:44 dgandhi joined #gluster
14:44 ZakWolfinger joined #gluster
14:45 dgandhi joined #gluster
14:46 rjoseph joined #gluster
14:47 dgandhi joined #gluster
14:48 dgandhi joined #gluster
14:48 roost joined #gluster
14:49 dgandhi joined #gluster
14:50 dgandhi joined #gluster
14:51 haomaiwa_ joined #gluster
14:52 dgandhi joined #gluster
14:52 schwing i installed gluster 3.5.2 with the ubuntu PPA by semiosis.  i want to upgrade to 3.6.x but it looks like it is a different PPA.  do i need to remove the semiosis PPA and then add the 3.6.x PPA?  will that affect my current installation at all?
14:53 dgandhi joined #gluster
14:54 ZakWolfinger schwing:  You've got it right.  it won't affect the current installation until you install the new version.
14:54 dgandhi joined #gluster
14:55 dgandhi joined #gluster
14:57 dgandhi joined #gluster
14:58 dgandhi joined #gluster
14:58 schwing it looks like to upgrade i need to unmount any existing connections, stop the gluster volume, stop the gluster services, replace the PPA, install the new packages, then start the gluster services.  does that sound right?  did i miss anything?
14:59 dgandhi joined #gluster
15:00 cholcombe joined #gluster
15:01 dgandhi joined #gluster
15:03 dgandhi joined #gluster
15:03 mbukatov joined #gluster
15:04 ZakWolfinger looks right to me.
15:07 bene2 joined #gluster
15:09 Bhaskarakiran joined #gluster
15:09 schwing the upgrade should only affect the binaries, not the volume or its data, right?  sorry if i'm coming off as overly paranoid
15:10 ZakWolfinger That has always been my experience when upgrading gluster.  However I still highly recommend having valid, TESTED backups of your data.  Just in case.
15:12 jobewan joined #gluster
15:14 meghanam joined #gluster
15:15 schwing i have three known good copies of the data (using gluster to replace one of them) but copying the 14TB data set is slow and i would like to avoid it if at all possible
15:17 deniszh left #gluster
15:23 ZakWolfinger Sounds like you are ready to me.
15:23 ZakWolfinger For this approach, schedule a downtime and prevent all your clients from accessing the servers (umount your volumes, stop gluster volumes, etc.).
15:23 ZakWolfinger 1. Stop all glusterd, glusterfsd and glusterfs processes on your server.
15:23 ZakWolfinger 2. Install  GlusterFS 3.6.0
15:23 ZakWolfinger 3. Start glusterd.
15:23 ZakWolfinger 4. Ensure that all started volumes have processes online in “gluster volume status”.
15:23 ZakWolfinger You would need to repeat these steps on all servers that form your trusted storage pool.
15:23 ZakWolfinger After upgrading the servers, it is recommended to upgrade all client installations to 3.6.0
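For the Ubuntu/PPA case being discussed, those steps might translate to roughly the following on each server; the PPA names are assumptions (check Launchpad for the current ones) and the backups mentioned earlier still apply:

    sudo add-apt-repository --remove ppa:semiosis/ubuntu-glusterfs-3.5
    sudo add-apt-repository ppa:gluster/glusterfs-3.6
    sudo apt-get update
    sudo service glusterfs-server stop
    sudo killall glusterfsd glusterfs       # make sure brick and client processes are gone too
    sudo apt-get install glusterfs-server glusterfs-client
    sudo service glusterfs-server start
    sudo gluster volume status              # confirm all brick processes are back online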
15:24 ZakWolfinger do you use quota or geo replication?
15:24 schwing no
15:24 ZakWolfinger You should be good to go.
15:24 schwing geo replication sounds cool, but i need to get moved in to this solution first
15:27 schwing my biggest issue is how slow my install is.  writes over the network are around 35Mbps which is quite painful.  hoping upgrading will at least help improve that.  we have about 25 million small files
15:28 schwing when i was doing my initial data push to it, it was around 300Mbps
15:29 ZakWolfinger Unfortunately performance issues are a bit beyond my level of expertise.  Perhaps someone else can comment on that.....
15:29 glusterbot News from newglusterbugs: [Bug 1214822] Disperse volume: linux tarball untar fails on a disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1214822>
15:30 schwing mine, too.  :)  still pretty new to this
15:31 glusterbot News from resolvedglusterbugs: [Bug 1212830] Data Tiering: AFR(replica) self-heal deamon details go missing on attach-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1212830>
15:39 Twistedgrim joined #gluster
15:39 shpank left #gluster
15:40 Twistedgrim Anyone around that has a sec to help me sanity check my troubleshooting? I have a gluster client that is dropping the mount point randomly. I think maybe XATTR issues?
15:40 Twistedgrim can someone PM me that might know?
15:41 nbalacha joined #gluster
15:51 hagarth joined #gluster
15:53 JoeJulian hexa-: No idea. It's supposed to adjust the op-version automatically, clearly that's failing. I would probably set it to the op-version being requested.
15:53 hexa- JoeJulian: ok, thanks
15:54 JoeJulian ZakWolfinger: To diagnose that, I would check the ,,(extended attributes) on the files that exist on the servers and try to find inconsistencies. The client errors should also say which brick the file wasn't on, so maybe that'll be a clue. Also, check the brick log around the same timestamp.
15:54 glusterbot ZakWolfinger: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
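Applied to ZakWolfinger's case, glusterbot's command would be run on each server against the brick copy of one of the problem files, e.g. (brick path is illustrative):

    getfattr -m . -d -e hex /export/brick1/gv0/path/to/problem-file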
15:55 JoeJulian yossarianuk: There's no way to convert a replica volume to a geo-replicated one. They're entirely different things.
15:57 DV joined #gluster
15:58 JoeJulian Twistedgrim: Randomly dropped mountpoint sounds like either a bug or oom killer. Use fpaste.org to show a client log example of when the mount point drops.
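A couple of quick checks along those lines (the log file name follows the usual fuse-client convention of the mount path with slashes turned into dashes; adjust for the real mount point):

    dmesg | grep -iE 'out of memory|killed process'
    tail -n 200 /var/log/glusterfs/mnt-yourmount.log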
16:05 yossarianuk JoeJulian: cheers - so I'll have to delete existing and build new geo - thanks!
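For reference, the distributed geo-rep setup in the linked guide boils down to roughly this, run from the master side once a slave volume exists (volume and host names are illustrative, and passwordless SSH to the slave is assumed):

    # create and distribute the pem keys used by geo-replication
    gluster system:: execute gsec_create
    # create the session from the existing master volume to the slave volume, then start it
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start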
16:08 huleboer joined #gluster
16:10 jbrooks joined #gluster
16:11 raghug joined #gluster
16:18 DV joined #gluster
16:32 R0ok_ joined #gluster
16:33 virusuy joined #gluster
16:33 virusuy joined #gluster
16:52 huleboer joined #gluster
16:53 ZakWolfinger joined #gluster
16:56 RameshN joined #gluster
17:11 kshlm joined #gluster
17:15 vikumar joined #gluster
17:15 Rapture joined #gluster
17:18 vimal joined #gluster
17:19 Pupeno joined #gluster
17:22 ZakWolfinger joined #gluster
17:26 jbrooks joined #gluster
17:31 kovsheni_ joined #gluster
17:33 ZakWolfinger joined #gluster
17:39 roost joined #gluster
17:39 Bardack 0xB                                    0xB      0xB
17:39 kovshenin joined #gluster
17:39 Bardack 0xB                                    0xB + [irc] connecting to the server                                                                                                                                                                     0xB
17:39 Bardack 0xB                                    0xB + [irc] logged in                                                                                                                                                                                    0xB
17:39 Bardack hum ... sorry
17:46 kovsheni_ joined #gluster
17:47 virusuy joined #gluster
17:48 kovshenin joined #gluster
17:50 the-me joined #gluster
17:51 kovshenin joined #gluster
17:57 kovsheni_ joined #gluster
18:03 kovshenin joined #gluster
18:06 kovshenin joined #gluster
18:12 kovsheni_ joined #gluster
18:14 kovshenin joined #gluster
18:17 kovsheni_ joined #gluster
18:17 ashiq joined #gluster
18:19 zerick_ joined #gluster
18:20 hchiramm__ joined #gluster
18:27 side_con1rol joined #gluster
18:29 LebedevRI joined #gluster
18:30 kovshenin joined #gluster
18:36 jrdn joined #gluster
18:46 jobewan joined #gluster
18:53 ashiq joined #gluster
18:54 smoothbutta joined #gluster
18:57 kripper joined #gluster
18:57 jiku joined #gluster
18:58 lalatenduM joined #gluster
18:59 side_control joined #gluster
18:59 jiku joined #gluster
19:01 kripper gluster is not starting anymore: http://pastebin.com/6AcJpKZL
19:01 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:02 kripper Here are the logs: http://fpaste.org/214894/42981576/
19:04 jiku joined #gluster
19:05 kripper any hint?
19:06 jiku joined #gluster
19:07 jiku joined #gluster
19:08 kripper for the record, I umounted a vol from another host
19:08 kripper now gluster starts fine again
19:09 LebedevRI joined #gluster
19:09 kripper why is it that another host is able to stop gluster from running?
19:10 schwing i saw a lot of entries in your pastebin that referred to nfs, so i was confused whether this was the server.  (i'm still new to gluster)
19:11 schwing line #12 is kinda interesting, too
19:11 kripper gluster died again
19:11 kripper it was online some seconds
19:12 kripper there is a "0-management: Failed uuid to hostname conversion" error
19:12 jiku joined #gluster
19:14 schwing did you change hostnames of the peers or anything?
19:14 kripper the thing with the other host was a false alarm, gluster starts on-line and then disconnects
19:14 kripper no
19:15 schwing will it stay alive with no clients connected?
19:16 jiku joined #gluster
19:17 schwing maybe check 'gluster peer status' and see if the uuids there match any errors in the logs
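For reference, the uuid-to-hostname mapping schwing mentions can be cross-checked against glusterd's own state files (paths are the usual defaults):

    gluster peer status
    cat /var/lib/glusterd/glusterd.info      # this node's own uuid
    grep . /var/lib/glusterd/peers/*          # peer uuids and the hostnames recorded for them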
19:19 kmai007 joined #gluster
19:20 kmai007 hi guys, have you ever seen a glusterfs-fuse mount add load to a server, that is not expected....
19:20 kmai007 the strange part is, the same volume is mounted on other servers and the load is not present there.....
19:21 kmai007 i suppose, i should strace the fuse pid....when mounted....
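A rough way to do what kmai007 describes (the mount path is illustrative; the bracketed pattern just keeps grep from matching itself):

    ps -ef | grep '[g]lusterfs.*your-mountpoint'    # find the fuse client pid for that mount
    strace -f -tt -p <PID> -e trace=network          # then attach to it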
19:24 kripper no mismatch
19:25 kmai007 no errors about mismatch
19:27 lifeofguenter joined #gluster
19:29 jiku joined #gluster
19:30 glusterbot News from newglusterbugs: [Bug 1214888] copy config during setup, clean /etc/cluster during teardown <https://bugzilla.redhat.com/show_bug.cgi?id=1214888>
19:45 Rapture joined #gluster
19:59 jbrooks joined #gluster
20:08 kovsheni_ joined #gluster
20:20 kovshenin joined #gluster
20:22 kovsheni_ joined #gluster
20:26 redbeard joined #gluster
20:30 Pupeno_ joined #gluster
20:30 ZakWolfinger left #gluster
20:30 glusterbot News from newglusterbugs: [Bug 1214912] Failure to recover disperse volume after add-brick failure <https://bugzilla.redhat.com/show_bug.cgi?id=1214912>
20:31 lifeofguenter joined #gluster
20:36 cholcombe gluster: has anyone tried deploying an nfsv4 setup with ganesha?  Wondering if that's possible yet or how functional it is
20:39 kovshenin joined #gluster
20:41 kovshenin joined #gluster
20:58 wushudoin1 joined #gluster
21:00 capri joined #gluster
21:01 Gill_ joined #gluster
21:04 Pupeno joined #gluster
21:09 kovsheni_ joined #gluster
21:11 kovshen__ joined #gluster
21:14 lexi2 joined #gluster
21:15 kovshenin joined #gluster
21:18 Pupeno_ joined #gluster
21:19 kovshenin joined #gluster
21:20 Pupeno joined #gluster
21:21 kovsheni_ joined #gluster
21:24 kovshenin joined #gluster
21:27 kovshenin joined #gluster
21:31 Pupeno_ joined #gluster
21:35 Pupeno joined #gluster
21:37 kovsheni_ joined #gluster
21:39 Pupeno_ joined #gluster
21:40 Pupeno joined #gluster
21:58 Pupeno_ joined #gluster
22:03 wkf joined #gluster
22:06 wkf joined #gluster
22:14 jbrooks joined #gluster
22:18 rafi joined #gluster
22:33 jbrooks joined #gluster
22:37 jbrooks_ joined #gluster
22:40 Pupeno joined #gluster
22:57 kripper joined #gluster
23:12 kripper Hi, one of my gluster bricks was on a mount that unmounted
23:12 kripper I now have files on the mounted disk and also inside the (unmounted) /mnt/disk directory
23:13 kripper should I merge both directories or just leave the last one?
23:18 kripper I just left the old copy assuming that gluster didn't really start or change the brick after the disk was unmounted (it just created some files)...and gluster started fine
23:20 cholcombe joined #gluster
23:22 kovshenin joined #gluster
23:45 JoeJulian kripper: I would probably do a "gluster volume heal $volname full" just to be sure.
23:45 JoeJulian kripper: What version are you running? Any modern version won't do that.
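Spelled out for this case (volume name is illustrative):

    gluster volume heal yourvol full     # full crawl and heal, as JoeJulian suggests
    gluster volume heal yourvol info     # then check that nothing is left pending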
