
IRC log for #gluster, 2016-06-29


All times shown according to UTC.

Time Nick Message
00:08 guhcampos joined #gluster
00:09 guhcampos_ joined #gluster
00:32 pampan I'm gonna take my chances and upgrade from 3.5.1 to 3.5.7 (no debian package for 3.5.(8|9)). Is anyone aware if I can do this in a rolling fashion? Say: stop gluster on one node, upgrade it, put it back on the cluster... or do all the nodes on the cluster have to be running the same version of the server?
00:39 twm left #gluster
00:44 shdeng joined #gluster
00:50 firemanxbr joined #gluster
00:56 firemanxbr joined #gluster
01:01 firemanxbr joined #gluster
01:01 wadeholler joined #gluster
01:05 kramdoss_ joined #gluster
01:13 Alghost joined #gluster
01:28 rafaels joined #gluster
01:31 Sue joined #gluster
01:36 paul98 joined #gluster
01:40 Lee1092 joined #gluster
01:41 JoeJulian @ppa
01:41 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
01:42 JoeJulian pampan: Yes, 3.5.9 is in our ppa.
01:46 JoeJulian pampan: And yes, upgrade servers (wait for self-heal to finish), then upgrade clients (unmount and mount).
01:46 JoeJulian pampan: Poorly worded. Upgrade each server and wait for self-heal to finish before going on to the next server.
01:46 JoeJulian After all the servers are upgraded, upgrade the clients.
01:46 JoeJulian The client must be remounted to load the new binary.
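A rough sketch of the rolling-upgrade sequence JoeJulian describes, assuming Debian-style packages and a replicated volume named myvol (the volume name, mount point, and server name are placeholders, not taken from the discussion):

    # on each server, one at a time
    service glusterfs-server stop            # the service is named glusterd on RPM-based systems
    killall glusterfsd glusterfs             # brick/self-heal daemons keep running otherwise
                                             # (this also drops any local fuse mounts)
    apt-get update && apt-get install glusterfs-server glusterfs-client
    service glusterfs-server start
    gluster volume heal myvol info           # repeat until no unhealed entries remain
    # only after every server is upgraded: remount each client to load the new binary
    umount /mnt/myvol && mount -t glusterfs server1:/myvol /mnt/myvol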
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:07 pampan Thanks JoeJulian!
02:07 pampan I'm actually using Debian, so no ppa there
02:10 pampan does the client really need to be upgraded too?
02:11 gem joined #gluster
02:23 paul98 joined #gluster
02:23 Klas joined #gluster
02:28 paul98 joined #gluster
02:32 om joined #gluster
02:44 wadeholler joined #gluster
02:48 poornimag joined #gluster
02:49 JoeJulian kkeithley: Do you know who was building debs? 3.5.8 & 9 are missing their builds.
03:00 magrawal joined #gluster
03:09 kramdoss_ joined #gluster
03:21 luizcpg joined #gluster
03:24 firemanxbr joined #gluster
03:29 Bhaskarakiran joined #gluster
03:30 sakshi joined #gluster
03:40 siel joined #gluster
03:43 jiffin joined #gluster
03:51 hagarth joined #gluster
03:53 Bhaskarakiran joined #gluster
03:57 overclk joined #gluster
04:05 Bhaskarakiran joined #gluster
04:08 Apeksha joined #gluster
04:10 ppai joined #gluster
04:17 darylllee joined #gluster
04:21 darylllee Hello, new here but had a couple of questions and I'm hoping someone can answer. Not sure of the etiquette, so hoping I'm not doing this wrong. The first was if anyone knows if there is some KVM/QEMU incompatibility with GlusterFS client 3.7.12 that wasn't there in 3.7.1. I switched repos from the main one to the storage interest group (SIG) and noticed it tossing the error "error: failed to initialize gluster connection to server: 'gluster1': No such file or directory" while it works fine on the 3.7.1 version.
04:29 RameshN joined #gluster
04:39 atinm joined #gluster
04:43 aspandey joined #gluster
04:44 Bhaskarakiran joined #gluster
04:46 ramky joined #gluster
04:47 JoeJulian darylllee: The SIG repo and our interaction with it is new, so I'm not sure. It sounds like, perhaps, some plugin to libvirt or qemu is missing.
04:48 Bhaskarakiran joined #gluster
04:48 darylllee The systems are identical Ansible deployments except for the repo I use (standard or SIG).  So probably something related to that
04:49 JoeJulian I'm looking to see what I can find.
04:50 darylllee Thanks :)
04:51 JoeJulian I wonder if you need the virt sig, http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/centos-release-qemu-ev-1.0-1.el7.noarch.rpm
04:52 JoeJulian Either that or some package that's not "required" by qemu, but is needed for gluster volumes.
04:52 JoeJulian fpaste "rpm -qa 'gluster*'"
04:55 darylllee I'll take a look.   The best I can see with something simple like qemu-img create is that it sees the volumes offline while the older version doesn't.
04:56 nehar joined #gluster
04:58 kdhananjay joined #gluster
05:01 DaiDV joined #gluster
05:02 DaiDV Hi
05:02 glusterbot DaiDV: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:09 kramdoss_ joined #gluster
05:11 shubhendu joined #gluster
05:17 satya4ever joined #gluster
05:21 prasanth joined #gluster
05:21 Manikandan joined #gluster
05:23 jiffin1 joined #gluster
05:24 darylllee I had just tried updating everything to 3.8 and that error goes away.   Something to do with 3.7.12 I suppose.   Thanks for tossing me some stuff JoeJulian
05:29 baojg joined #gluster
05:33 ndarshan joined #gluster
05:36 sakshi joined #gluster
05:36 [diablo] joined #gluster
05:39 hgowtham joined #gluster
05:42 satya4ever joined #gluster
05:42 darylllee joined #gluster
05:43 Saravanakmr joined #gluster
05:44 nehar_ joined #gluster
05:45 sakshi joined #gluster
05:47 plarsen joined #gluster
05:47 anil joined #gluster
05:49 atinm joined #gluster
05:53 [diablo] joined #gluster
05:53 Ikke joined #gluster
05:53 ashiq joined #gluster
05:53 atalur joined #gluster
05:55 F2Knight joined #gluster
05:56 karthik___ joined #gluster
05:56 spalai joined #gluster
05:57 aravindavk_ joined #gluster
06:00 Alghost_ joined #gluster
06:01 itisravi joined #gluster
06:01 micke joined #gluster
06:02 baojg joined #gluster
06:05 prasanth joined #gluster
06:05 ppai joined #gluster
06:12 gowtham joined #gluster
06:15 kotreshhr joined #gluster
06:17 atinm joined #gluster
06:18 itisravi joined #gluster
06:19 Dogethrower joined #gluster
06:20 skoduri joined #gluster
06:22 kshlm joined #gluster
06:22 nishanth joined #gluster
06:22 jtux joined #gluster
06:24 msvbhat joined #gluster
06:27 rafi joined #gluster
06:29 merlink joined #gluster
06:47 pur joined #gluster
06:57 kaushal_ joined #gluster
07:02 d0nn1e joined #gluster
07:06 baojg joined #gluster
07:08 jri joined #gluster
07:17 karnan joined #gluster
07:19 Wizek joined #gluster
07:24 sakshi joined #gluster
07:26 MikeLupe joined #gluster
07:27 DaiDV joined #gluster
07:31 ghenry joined #gluster
07:32 rouven joined #gluster
07:45 jwd joined #gluster
07:50 deniszh joined #gluster
07:57 karthik___ joined #gluster
07:59 Alghost joined #gluster
08:00 karnan joined #gluster
08:01 ivan_rossi joined #gluster
08:02 prasanth joined #gluster
08:03 nehar_ joined #gluster
08:14 Slashman joined #gluster
08:15 Bhaskarakiran joined #gluster
08:16 kdhananjay joined #gluster
08:17 rjoseph joined #gluster
08:25 Alghost_ joined #gluster
08:33 itisravi joined #gluster
08:33 [diablo] joined #gluster
08:36 aravindavk joined #gluster
08:36 Klas hi, I've got a three-node lab setup with a volume shared between three nodes, one which is just an arbiter
08:37 Klas for some reason, the arbiter is showing one of the nodes as disconnected
08:37 Klas 1, 2 and 3
08:37 Klas 1: 2 and 3 connected
08:37 Klas 2: 1 and 3 connected
08:37 Klas 3: 1 disconnected and 2 connected
08:38 Klas I have not been terribly kind when testing quorum, so I have shut the machines down in ugly ways (vmware shut off)
08:39 baojg joined #gluster
08:43 [Enrico] joined #gluster
08:44 Klas apparently, a reboot solved it
08:44 Klas I dislike that "solution" =P
08:46 Klas oh, look, now the other two nodes consider 3 to be down instead
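For a one-sided "disconnected" state like this, restarting just the management daemon on the node that disagrees is usually gentler than a reboot; a sketch, with the service name depending on the distribution:

    # run on each node and compare how they see the cluster
    gluster peer status
    gluster volume status
    # glusterd can be restarted on its own; brick processes and client mounts
    # keep running while it is down
    systemctl restart glusterd               # glusterfs-server on Debian/Ubuntu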
08:46 Ulrar Oh wow, I was about to install 3.7.12 but looking at the last 3 mails from Lindsay it's a bit scary
08:49 gem joined #gluster
08:49 rafi1 joined #gluster
08:58 kotreshhr joined #gluster
09:04 nehar_ joined #gluster
09:06 prasanth joined #gluster
09:09 spalai left #gluster
09:09 hackman joined #gluster
09:18 sakshi joined #gluster
09:25 spalai joined #gluster
09:32 sakshi joined #gluster
09:33 skoduri joined #gluster
09:34 jiffin1 joined #gluster
09:45 sakshi joined #gluster
09:52 karnan joined #gluster
09:54 ashiq joined #gluster
09:59 hgowtham joined #gluster
09:59 spalai left #gluster
09:59 spalai joined #gluster
10:01 sandersr joined #gluster
10:08 Gnomethrower joined #gluster
10:13 aravindavk joined #gluster
10:17 hgowtham joined #gluster
10:24 baojg joined #gluster
10:27 abk joined #gluster
10:27 msvbhat joined #gluster
10:29 aspandey joined #gluster
10:31 martin_pb joined #gluster
10:40 Bhaskarakiran joined #gluster
10:43 kotreshhr joined #gluster
10:43 martin_pb joined #gluster
10:48 cloph hi * - when using rsync to copy data to a glusterfs mount, I get tons of acl warnings: http://pastie.org/10894011 - rsync is only invoked with -a , no -A - any way to silence that?
10:48 glusterbot Title: #10894011 - Pastie (at pastie.org)
10:48 martin_pb is it better to add a separate new network interface for the two nodes to communicate with each other, or not?
10:48 hgowtham joined #gluster
10:49 martin_pb of course, i mean for the gluster
10:50 msvbhat joined #gluster
10:53 cloph ah, found a solution to my problem - using --inplace makes the warnings go away (and it's also less stressful on the volume I suppose, fewer renames are good...)
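For reference, an invocation along the lines of what cloph ended up with (source and destination paths are placeholders):

    # -a alone produced the ACL warnings on the fuse mount; --inplace writes into
    # the destination files directly instead of creating a temporary file and
    # renaming it over the target, which also means fewer renames to replicate
    rsync -a --inplace /data/source/ /mnt/glustervol/backup/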
10:53 cloph if it is just a separate network interface and not a separate link, it is only marginally better
10:57 aspandey joined #gluster
10:59 prasanth joined #gluster
11:00 nehar_ joined #gluster
11:00 johnmilton joined #gluster
11:10 spalai left #gluster
11:13 martin_pb what do you mean by "separate link"?
11:14 Klas physical and logical
11:14 Klas dedicated switch net and so forth
11:15 Klas the biggest issue with disk clusters in the same subnet as other things is that there is an increased risk of collisions and congestion
11:15 Klas so a separate switch, in a separate subnet, is always preferable
11:16 martin_pb yes, it will be located on a separated vlan
11:17 Klas then it's advantageous
11:18 martin_pb because we have very slow writing speed for small files..
11:19 Klas ah
11:19 Klas that is a general issue with gluster, from what I've read
11:21 martin_pb yes i know too. This is the main reason why we try to make some changes..
11:22 martin_pb Thank you for your help
11:29 hchiramm joined #gluster
11:29 rafi joined #gluster
11:30 kkeithley [22:49:14] <JoeJulian> kkeithley: Do you know who was building debs? 3.5.8 & 9 are missing their builds.
11:31 kkeithley Until I started doping it it was semiosis.  That may have been around the gap between when he stopped and I started
11:31 kkeithley s/doping/doing/
11:31 glusterbot What kkeithley meant to say was: Until I started doing it it was semiosis.  That may have been around the gap between when he stopped and I started
11:32 kkeithley erm, no, that doesn't seem right
11:35 kkeithley Looks like they just weren't ever built.
11:36 rafi joined #gluster
11:37 gem joined #gluster
11:37 luizcpg joined #gluster
11:38 rafi1 joined #gluster
11:40 poornimag joined #gluster
11:41 ppai joined #gluster
11:42 jiffin joined #gluster
11:56 surabhi joined #gluster
11:58 karthik___ joined #gluster
11:58 poornimag joined #gluster
11:59 kotreshhr joined #gluster
11:59 kshlm Weekly community meeting starts in 1 minute in #gluster-meeting
12:02 gem joined #gluster
12:04 rafaels joined #gluster
12:08 cloph when syncing lots of files to a gluster volume using rsync - is it advisable to use --delay-updates or --delete-after ?
12:08 cloph or is it better to have the renames/deletions mixed with file creation?
12:20 rouven hey, i just upgraded from 3.7.8 to 3.7.12 on centos 7.2 and now my glusterd doesn't come up cleanly and my volumes are unavailable
12:21 rouven [2016-06-29 12:13:19.026529] E [rpc-transport.c:292:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.7.12/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
12:21 rouven is one of the messages
12:23 post-factum rouven: it is not the error you are looking for
12:23 post-factum grep log further
12:23 rouven 2016-06-29 12:22:27.160869] W [glusterfsd.c:1251:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f6a57a00dc5] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x7f6a5907a915] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f6a5
12:24 glusterbot rouven: ('s karma is now -144
12:24 rouven 907a78b] ) 0-: received signum (15), shutting down
12:24 rouven this one?
12:24 rouven oops
12:25 rouven no, this one seems to be the shutdown failing
12:27 rouven https://paste.gnome.org/pledyowlb
12:27 glusterbot Title: GNOME Pastebin (at paste.gnome.org)
12:28 rouven the second peer is down
12:32 rouven post-factum: any hints on what might have gone wrong?
12:34 post-factum is that full log?
12:34 rouven after restart of glusterd and glusterfsd
12:34 rouven so, yes
12:35 rouven attached the gluster volume info screen as a comment to that paste
12:35 rouven s/info/status
12:35 post-factum is 10.247.12.12 down? is that second node?
12:36 rouven yepp
12:41 rouven post-factum: journal says glusterd.service failed but i can't see why
12:41 post-factum everything is in text logs
12:42 rouven ok, systemd also thinks it's running ok
12:49 Saravanakmr joined #gluster
12:59 rafaels joined #gluster
13:03 guhcampos joined #gluster
13:06 Jules-2 joined #gluster
13:08 ira joined #gluster
13:08 rouven post-factum: does it need the second replica to bring up the volume?
13:08 rouven rebooted the machine still no difference
13:09 alvinstarr left #gluster
13:09 alvinstarr joined #gluster
13:10 gluco joined #gluster
13:11 jiffin joined #gluster
13:15 post-factum rouven: it might
13:16 post-factum afaik, glusterd won't start without second node up
13:16 kshlm post-factum, GlusterD starts up.
13:17 kshlm But it needs the other node to start bricks.
13:17 post-factum oh, then you need to start volume with force
13:17 kshlm It's a silly bug, that's been know for some time. We've not gotten around to fixing it yet.
13:17 kshlm s/know/known/
13:17 glusterbot What kshlm meant to say was: It's a silly bug, that's been known for some time. We've not gotten around to fixing it yet.
13:18 post-factum gluster volume start foobar force
13:19 rouven post-factum: that did the trick, thanks
13:19 post-factum np
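Summarising the workaround for anyone who hits the same symptom (the volume name foobar is from post-factum's example; substitute your own):

    # glusterd itself comes up, but with the only other replica peer down it
    # does not spawn the local brick processes
    gluster volume start foobar force        # force-start so the local bricks come up
    gluster volume status foobar             # confirm the bricks are online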
13:25 Klas is that even a bug?
13:25 Klas it sounds like sane countermeasure to avoid split-brain?
13:37 rouven are there any 32bit packages for debian jessie out there somewhere?
13:38 rouven http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/apt/dists/jessie/main/
13:38 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST/Debian/jessie/apt/dists/jessie/main (at download.gluster.org)
13:46 jwd joined #gluster
13:49 dnunez joined #gluster
13:53 hi11111 joined #gluster
13:54 ahino joined #gluster
14:05 muneerse2 joined #gluster
14:06 squizzi joined #gluster
14:08 arcolife joined #gluster
14:08 arcolife joined #gluster
14:12 hagarth joined #gluster
14:14 bowhunter joined #gluster
14:18 bowhunter joined #gluster
14:19 plarsen joined #gluster
14:20 martinetd I'm sure this has been asked dozens of times already, but is there a sane way to have systemd schedule a glusterfs (fuse) mount for a server running on localhost after the server is up and running?
14:20 martinetd Ordering the mount after glusterd.service is not enough as the service doesn't have any way to signal the volumes are all ready
14:26 MikeLupe joined #gluster
14:28 martinetd I'd rather avoid having a dummy sleep 10 oneshot service in between..
14:29 ben453 joined #gluster
14:30 F2Knight joined #gluster
14:49 ramky joined #gluster
14:55 JoeJulian martinetd: I've not had a problem with mount ordering with systemd.
14:56 chirino joined #gluster
14:57 rafaels joined #gluster
15:00 kshlm joined #gluster
15:08 wushudoin joined #gluster
15:10 jiffin1 joined #gluster
15:12 hgowtham joined #gluster
15:25 hchiramm joined #gluster
15:26 chirino_m joined #gluster
15:37 paul98 urgh man i hate windows!
15:38 jiffin joined #gluster
15:39 paul98 should a Windows iSCSI target be able to write to a brick on glusterfs? I'm struggling: I can get it writing to one brick but not replicating, but when I map a network drive from Linux it works and replicates. Do I need to map the iSCSI LUN to /dev/storage or to the partition, e.g. /storage/windows?
15:48 JoeJulian Nothing but gluster should write to a brick.
15:50 paul98 so how does iscsi work
15:51 paul98 to replicate between the two
15:51 paul98 as it's client side the replication isn't it
15:51 JoeJulian I believe work was done to use libgfapi with tgtd. I don't have the problem of using Windows, so I haven't had the displeasure of needing iscsi.
15:52 JoeJulian See if googling "iscsi libgfapi" produces any usable results.
15:53 baojg joined #gluster
15:55 paul98 thanks JoeJulian
15:55 takarider joined #gluster
15:55 paul98 I would have got rid of Windows personally
15:55 JoeJulian Yeah, I know what you meen.
15:55 JoeJulian mean
15:55 * JoeJulian needs coffee...
15:56 kpease joined #gluster
15:57 cloph JoeJulian: can you elaborate on the mount-ordering? what would I have to use to have the mount wait for the local glusterfs-server?
15:58 cloph x-systemd.requires=glusterfs-server.service in /etc/fstab doesn't work for me (debian 8)
16:07 jiffin joined #gluster
16:12 jiffin joined #gluster
16:19 kramdoss_ joined #gluster
16:32 yopp left #gluster
16:38 martinetd JoeJulian: My setup is two replicated servers; both servers try to mount ip1,ip2:/volume ; if both servers are down and I boot just one it will almost always fail, second one will usually work
16:39 JoeJulian Ah, that's probably because the brick processes don't start until the servers have quorum. That's, imho, a bug in that it does that by default.
16:41 martinetd So I could change some setting to start right away and it should work™ ?
16:43 hagarth joined #gluster
16:43 ivan_rossi left #gluster
16:44 martinetd hmm I have cluster.quorum-type set to none, do you happen to know the exact setting?
16:47 JoeJulian There's no setting that would fix that. The only solution I can think of would be to add a drop-in with an extra ExecStart=/usr/bin/gluster volume start $volname force
16:47 JoeJulian You know about systemd drop-ins, right?
16:51 hackman joined #gluster
16:52 cloph drop-ins like whatever.service.d/overrides.conf or whatever.unit.requires/linktootherunit?
16:53 martinetd Yes
16:55 martinetd I'll think about it over night, thanks
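One way to wire this up, combining cloph's fstab attempt with JoeJulian's drop-in idea; a sketch only: the volume name and mount point are placeholders, the service is glusterd.service on RPM-based systems and glusterfs-server.service on Debian, ExecStartPost is used rather than a second ExecStart because glusterd is not a oneshot unit, and the x-systemd.requires mount option needs a reasonably new systemd (which may be why it did nothing on Debian 8):

    # /etc/fstab -- the generated mount unit gets Requires=/After= on glusterd.service
    localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service  0 0

    # /etc/systemd/system/glusterd.service.d/force-start.conf
    [Service]
    ExecStartPost=/usr/bin/gluster volume start myvol force

The force start is what lets the local bricks come up even while the other replica is still down, which is the situation martinetd describes.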
17:15 skylar joined #gluster
17:31 karnan joined #gluster
17:38 julim joined #gluster
17:49 shubhendu joined #gluster
18:10 jwd joined #gluster
18:25 kovshenin joined #gluster
18:36 jri joined #gluster
18:41 wushudoin joined #gluster
18:47 julim joined #gluster
19:03 fale joined #gluster
19:04 Iouns joined #gluster
19:04 delhage joined #gluster
19:21 deniszh joined #gluster
19:26 jiffin joined #gluster
20:06 kovshenin joined #gluster
20:19 julim joined #gluster
20:21 JonathanD joined #gluster
20:45 owlbot joined #gluster
20:50 hagarth joined #gluster
20:57 morgbin joined #gluster
21:08 Hanefr joined #gluster
21:28 owlbot joined #gluster
21:40 d0nn1e joined #gluster
22:38 hagarth joined #gluster
22:44 pampan joined #gluster
22:57 firemanxbr joined #gluster
23:02 pampan Hi guys! Me again, continuing this odyssey of making syncing work. Right now, basically for every operation I want to execute, the gluster CLI returns 'Another transaction is in progress. Please try again after sometime.'. It's been like that for more than 12 hours, so I suspect something is not entirely right. Can you guys think of a way of knowing what this 'another transaction' is doing?
23:02 JoeJulian You could probably pull it out of a state dump.
23:04 pampan JoeJulian: thanks for the hint, I'll try to figure it out
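A sketch of what "pull it out of a state dump" can look like; the dump directory is the usual default (/var/run/gluster) and may differ depending on how the packages were built:

    # SIGUSR1 makes glusterd write a dump of its internal state
    kill -USR1 $(pidof glusterd)
    # look for the cluster-wide lock that keeps 'Another transaction is in progress' around
    grep -i lock /var/run/gluster/glusterdump.*
    # per-volume state dumps can also be taken from the CLI
    gluster volume statedump myvol           # 'myvol' is a placeholder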
23:04 sage joined #gluster
23:09 martin_pb joined #gluster
23:32 luizcpg joined #gluster
23:32 hagarth joined #gluster
23:43 luizcpg left #gluster
