IRC log for #gluster, 2015-07-07


All times are shown in UTC.

Time Nick Message
00:09 hagarth joined #gluster
00:27 doctorray joined #gluster
00:28 MugginsM joined #gluster
00:29 side_control joined #gluster
00:44 rwheeler joined #gluster
00:45 MugginsM joined #gluster
00:48 nangthang joined #gluster
01:02 TheCthulhu3 joined #gluster
01:09 Pupeno joined #gluster
01:11 calavera joined #gluster
01:17 TheCthulhu joined #gluster
01:29 harish joined #gluster
01:32 MugginsM joined #gluster
01:59 nangthang joined #gluster
02:29 shubhendu joined #gluster
02:52 shaunm_ joined #gluster
02:56 overclk joined #gluster
03:09 Pupeno joined #gluster
03:14 jcastill1 joined #gluster
03:19 jcastillo joined #gluster
03:20 jbrooks joined #gluster
03:20 sripathi1 joined #gluster
03:31 TheSeven joined #gluster
03:31 julim joined #gluster
03:34 atinm joined #gluster
03:36 kdhananjay joined #gluster
03:43 meghanam joined #gluster
03:48 nishanth joined #gluster
04:05 spalai left #gluster
04:07 RameshN joined #gluster
04:08 kanagaraj joined #gluster
04:09 itisravi joined #gluster
04:10 shubhendu joined #gluster
04:19 sakshi joined #gluster
04:25 maveric_amitc_ joined #gluster
04:27 anmol joined #gluster
04:27 RameshN joined #gluster
04:30 ghenry joined #gluster
04:30 ghenry joined #gluster
04:36 raghug joined #gluster
04:36 nbalacha joined #gluster
04:39 bharata-rao joined #gluster
04:45 rafi joined #gluster
04:46 hagarth joined #gluster
04:46 vimal joined #gluster
04:47 ramteid joined #gluster
04:54 spalai joined #gluster
04:54 spalai left #gluster
04:59 soumya_ joined #gluster
05:00 gem joined #gluster
05:04 deepakcs joined #gluster
05:06 ndarshan joined #gluster
05:08 dusmant joined #gluster
05:08 jiffin joined #gluster
05:10 smohan joined #gluster
05:15 gildub joined #gluster
05:16 Manikandan joined #gluster
05:19 cyberbootje joined #gluster
05:23 ashiq joined #gluster
05:25 nishanth joined #gluster
05:25 deepakcs joined #gluster
05:26 pppp joined #gluster
05:26 raghu joined #gluster
05:29 arcolife joined #gluster
05:30 kshlm joined #gluster
05:39 vmallika joined #gluster
05:41 rafi joined #gluster
05:45 spandit joined #gluster
05:46 spandit_ joined #gluster
05:47 anil joined #gluster
05:52 deepakcs joined #gluster
05:57 spalai joined #gluster
05:57 DV joined #gluster
06:01 pppp joined #gluster
06:03 saltsa joined #gluster
06:06 Pupeno joined #gluster
06:07 hagarth left #gluster
06:08 voleatech joined #gluster
06:12 kdhananjay joined #gluster
06:15 maveric_amitc_ joined #gluster
06:17 spalai1 joined #gluster
06:18 Trefex joined #gluster
06:21 Saravana_ joined #gluster
06:25 jtux joined #gluster
06:26 Bhaskarakiran joined #gluster
06:27 DV joined #gluster
06:32 jtux joined #gluster
06:40 ramky joined #gluster
06:43 anrao joined #gluster
07:00 ChrisNBlum joined #gluster
07:05 rjoseph joined #gluster
07:06 ramkrsna joined #gluster
07:08 astr joined #gluster
07:10 [Enrico] joined #gluster
07:16 LebedevRI joined #gluster
07:17 kotreshhr joined #gluster
07:19 dgbaley joined #gluster
07:19 dgbaley Hey, I'm using EPEL on CentOS 7. qemu isn't linked to libgfapi but qemu-img is. Does that sound right? Is there a place to get qemu with libgfapi?
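A quick way to check which qemu binaries are actually linked against libgfapi is to inspect them with ldd; a sketch, with the usual CentOS 7 binary locations assumed (paths may differ on other builds):

    ldd /usr/bin/qemu-img | grep -i gfapi
    ldd /usr/libexec/qemu-kvm | grep -i gfapi           # path used by the CentOS qemu-kvm package
    ldd /usr/bin/qemu-system-x86_64 | grep -i gfapi     # path used by other builds, e.g. EPEL/Fedora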
07:27 itisravi joined #gluster
07:34 mbukatov joined #gluster
07:35 atalur joined #gluster
07:37 atalur joined #gluster
07:39 atalur joined #gluster
07:40 fsimonce joined #gluster
07:44 kokopelli joined #gluster
07:45 kokopelli Hi all, after upgrading to 3.7 logrotate does not run, any idea?
07:47 jcastill1 joined #gluster
07:48 Trefex joined #gluster
07:52 jcastillo joined #gluster
07:58 ctria joined #gluster
08:01 Trefex1 joined #gluster
08:02 sripathi joined #gluster
08:02 Trefex2 joined #gluster
08:11 pethani joined #gluster
08:11 glusterbot News from newglusterbugs: [Bug 1240524] configuration.so seems to be missing <https://bugzilla.redhat.com/show_bug.cgi?id=1240524>
08:11 glusterbot News from newglusterbugs: [Bug 1200914] pathinfo is wrong for striped replicated volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1200914>
08:12 fsimonce joined #gluster
08:15 pethani hello, i want to evaluate glusterfs performance as master project in uni.
08:16 pethani is there any benchmark available for it?
08:16 pethani evaluation is for big storage.
08:25 jmcantrell left #gluster
08:28 nsoffer joined #gluster
08:28 garmstrong joined #gluster
08:28 harish_ joined #gluster
08:30 ajames-41678 joined #gluster
08:33 hagarth joined #gluster
08:36 itisravi joined #gluster
08:40 ajames-41678 Hi folks, I am having some issues with a 3-node gluster cluster: I am unable to run 'gluster vol status' and the self-heal log says 'failed to fetch volume file'. There appears to be a lock, but I cannot figure out where. Any takers :)
08:41 glusterbot News from newglusterbugs: [Bug 1230523] glusterd: glusterd crashing if you run  re-balance and vol status  command parallely. <https://bugzilla.redhat.com/show_bug.cgi?id=1230523>
08:44 hagarth ajames-41678: did you restart glusterd on all nodes?
08:44 ajames-41678 not yet
08:44 ajames-41678 what impact would that have?
08:45 Slashman joined #gluster
08:45 ajames-41678 fyi I am using ubuntu 14.04 - service glusterfs-server restart?
08:46 ajames-41678 there appears not to be a separate glusterd service on 14.04 (installed from ppa)
08:49 kokopelli I use the old glusterfs-server service file in init.d, it's running
08:50 ajames-41678 is restarting the service likely to impact any client mounts to the vol ?
08:50 dusmant joined #gluster
08:58 elico joined #gluster
08:58 kovshenin joined #gluster
09:01 jcastill1 joined #gluster
09:06 jcastillo joined #gluster
09:22 yazhini joined #gluster
09:23 DV joined #gluster
09:26 Manikandan joined #gluster
09:32 sakshi joined #gluster
09:34 sakshi joined #gluster
09:39 soumya_ joined #gluster
09:41 glusterbot News from newglusterbugs: [Bug 1240577] Data Tiering: Database locks observed on tiered volumes on continous writes to a file <https://bugzilla.redhat.com/show_bug.cgi?id=1240577>
09:44 Trefex joined #gluster
09:49 tanuck joined #gluster
09:58 ppai joined #gluster
10:00 itisravi_ joined #gluster
10:07 jcastill1 joined #gluster
10:11 atinm joined #gluster
10:12 nbalachandran_ joined #gluster
10:12 jcastillo joined #gluster
10:15 kkeithley1 joined #gluster
10:15 karnan joined #gluster
10:16 NTQ joined #gluster
10:17 NTQ Hi. Is it possible to create backups of files which were deleted on a replicated brick?
10:23 rwheeler joined #gluster
10:28 Manikandan joined #gluster
10:36 itisravi joined #gluster
10:36 DV joined #gluster
10:41 Manikandan joined #gluster
10:44 ira joined #gluster
10:44 Lee1092 joined #gluster
10:47 kotreshhr joined #gluster
10:48 rjoseph joined #gluster
10:50 kxseven joined #gluster
10:50 elico joined #gluster
10:53 kshlm joined #gluster
10:53 pppp joined #gluster
10:54 nbalacha joined #gluster
10:54 autoditac joined #gluster
10:55 rjoseph joined #gluster
11:00 kotreshhr1 joined #gluster
11:12 glusterbot News from newglusterbugs: [Bug 1240603] glusterfsd crashed after volume start force <https://bugzilla.redhat.com/show_bug.cgi?id=1240603>
11:37 kotreshhr joined #gluster
11:46 rafi1 joined #gluster
11:46 hagarth joined #gluster
11:47 kshlm joined #gluster
11:48 rjoseph joined #gluster
11:52 atinm joined #gluster
11:56 rafi1 REMINDER: Gluster Community Bug Triage meeting starting in another 5 minutes in #gluster-meeting
11:59 dusmant joined #gluster
12:01 rafi joined #gluster
12:05 soumya_ joined #gluster
12:05 kdhananjay joined #gluster
12:09 jtux joined #gluster
12:12 autoditac joined #gluster
12:13 squaly joined #gluster
12:17 unclemarc joined #gluster
12:19 rjoseph joined #gluster
12:25 DV joined #gluster
12:32 maveric_amitc_ joined #gluster
12:36 anrao joined #gluster
12:37 pppp joined #gluster
12:42 ekuric joined #gluster
12:42 glusterbot News from newglusterbugs: [Bug 1233333] glusterfs-resource-agents - volume - doesn't stop all processes <https://bugzilla.redhat.com/show_bug.cgi?id=1233333>
12:42 Manikandan joined #gluster
12:48 jiffin NTQ: if you are referring to a feature like a trash/recycle bin (it keeps deleted files), we do have one
12:49 NTQ jiffin: Cool. Where I can find information about this?
12:49 jiffin NTQ: http://gluster.readthedocs.org/en/latest/Features/trash_xlator/
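For reference, a minimal sketch of enabling the trash translator, assuming GlusterFS 3.7 or later and the volume name gv0 used later in this conversation; the size option is taken from the trash xlator docs linked above:

    gluster volume set gv0 features.trash on
    gluster volume set gv0 features.trash-max-filesize 500MB   # optional cap on the size of files kept, per the docs
    # deleted/truncated files are then kept under the .trashcan directory at the root of the volume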
12:52 theron joined #gluster
12:53 NTQ jiffin: Hm.. it seems that features.trash is not installed: "volume set: failed: option : features.trash does not exist"
12:55 msvbhat NTQ: Which version of glusterfs are you using?
12:56 msvbhat NTQ: I think it's only in newer version
12:56 NTQ 3.4.2 from the Ubuntu repository
12:57 msvbhat NTQ: Not in 3.4.2 for sure, I think the feature is from 3.7 (or maybe 3.6, I'm not sure :))
12:57 * msvbhat brb
12:57 B21956 joined #gluster
12:58 NTQ I will add the ppa from here: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6
12:58 NTQ Or better this one: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7
12:59 ajames-41678 joined #gluster
13:09 dgandhi joined #gluster
13:09 hagarth anybody using gluster with openstack here?
13:10 Manikandan joined #gluster
13:11 meghanam joined #gluster
13:12 jiffin NTQ: trash-feature is introduced in 3.7
13:12 ndevos hagarth: I wonder how many people in this channel use openstack at all, did you ask in #openstack (or whatever it is) too?
13:12 NTQ jiffin: I just upgraded to 3.7 but now I get this error: "volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again"
13:13 NTQ Of course I upgraded both sides
13:13 aaronott joined #gluster
13:13 hagarth ndevos: not sure about hitting openstack .. just sampling here atm
13:13 jiffin NTQ: all servers should be updated and u may need to restart the volume
13:13 jiffin after updating
13:14 ndevos hagarth: I hope you get more samples then!
13:14 NTQ jiffin: Do I have to "gluster volume stop gv0" on both sides or just on one side?
13:14 jiffin NTQ: one server in storage pool will be enough
13:15 hagarth ndevos: thanks, I hope so too. We can even hit the gluster MLs with this question.
13:15 ndevos hagarth: if "we" means "you", then sure!
13:15 wkf joined #gluster
13:15 hagarth ndevos: I don't mind doing everything ;)
13:16 NTQ It doesn't work this way. I tried to stop the volume, then restarted glusterfs-server, and then tried it again.
13:16 ndevos hagarth: hey, me neither, but I'm trying to not get too involved in openstack bits just yet
13:16 jmarley joined #gluster
13:16 hagarth ndevos: there was nothing implied to rope you into that ;)
13:17 jiffin NTQ : can u please explain steps followed after the upgrade to 3.7?
13:17 ndevos hagarth: hehe, sorry I didnt bite the bait!
13:19 NTQ jiffin: On Ubuntu I did this: "service glusterfs-server restart", then "gluster volume stop gv0", "gluster volume start gv0", "gluster volume set gv0 features.trash on"
13:19 jiffin k
13:19 jiffin then it failed?
13:20 NTQ jiffin: yes. But even if I do a "service glusterfs-server stop", there are 4 remaining processes: 1 glusterfsd, and 3 glusterfs
13:20 dusmant joined #gluster
13:21 pppp joined #gluster
13:21 NTQ I guess these are old processes
13:21 haomaiwang joined #gluster
13:22 kshlm joined #gluster
13:22 haomaiwang joined #gluster
13:24 julim joined #gluster
13:25 georgeh-LT2 joined #gluster
13:26 NTQ Now it worked. Something went wrong during the last shutdown.
13:27 Saravana_ joined #gluster
13:27 meghanam joined #gluster
13:27 shyam joined #gluster
13:29 jiffin NTQ: sorry for the delayed response, when u start and stop the volume all the glusterfsd processes should be restarted
13:31 jiffin NTQ: i am not familiar with ubuntu, correct me if I am wrong: do u have "service glusterd restart"?
13:31 NTQ okay
13:31 NTQ Is it possible to specify the ports glusterfs should use? Then I can better configure the firewall.
13:33 jiffin NTQ: i am not familiar with ports used by glusterfs
13:34 jiffin maybe http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/index.html?highlight=ports
13:34 jiffin will be helpful
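For the firewall question, a hedged sketch for Ubuntu with ufw, assuming the GlusterFS 3.4+ port layout (glusterd/management on 24007-24008/tcp, plus one port per brick starting at 49152/tcp):

    ufw allow 24007:24008/tcp
    ufw allow 49152:49160/tcp    # widen this range to cover the number of bricks on the server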
13:36 squizzi joined #gluster
13:37 doekia joined #gluster
13:41 jrm16020 joined #gluster
13:46 plarsen joined #gluster
13:47 calavera joined #gluster
13:48 haomaiwa_ joined #gluster
13:53 theron joined #gluster
14:17 kotreshhr left #gluster
14:18 ndevos @paste
14:18 glusterbot ndevos: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
14:18 ndevos maybe we should include http://termbin.com/ in there somehow?
14:19 ndevos something like this should work: "gluster volume info | nc termbin.com 999"
14:19 ndevos oh, port 9999
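Putting the two hints together, a small sketch of piping command output to a paste service (termbin listens on port 9999, as corrected above):

    gluster volume info | nc termbin.com 9999
    gluster volume info | fpaste         # RPM-based distros
    gluster volume info | pastebinit     # Debian, Ubuntu, Arch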
14:22 spalai left #gluster
14:37 coredump joined #gluster
14:42 bennyturns joined #gluster
14:42 Gill joined #gluster
14:43 soumya_ joined #gluster
14:45 shyam joined #gluster
14:57 arthurh joined #gluster
15:02 greg___ joined #gluster
15:03 greg___ hi all ... anyone have a good link for tuning a gluster install?  i followed these instructions from DO (https://www.digitalocean.com/community/tutorials/automating-the-deployment-of-a-scalable-wordpress-site) and am seeing really slow performance on my wordpress site ... suspect the high CPU on my 2 file servers
15:08 kdhananjay joined #gluster
15:08 topshare joined #gluster
15:09 mckaymatt joined #gluster
15:09 topshare joined #gluster
15:09 topshare joined #gluster
15:11 shubhendu joined #gluster
15:16 jmarley joined #gluster
15:16 NTQ joined #gluster
15:19 PatNarcisoZzZ joined #gluster
15:24 arthurh joined #gluster
15:30 neofob joined #gluster
15:40 jcastill1 joined #gluster
15:45 jcastillo joined #gluster
15:46 RedW joined #gluster
15:46 PatNarcisoZzZ joined #gluster
15:46 mckaymatt joined #gluster
15:49 jdossey joined #gluster
15:50 bcicen joined #gluster
15:59 rmstar joined #gluster
16:01 cholcombe joined #gluster
16:02 shyam joined #gluster
16:05 nishanth joined #gluster
16:07 spalai1 joined #gluster
16:08 nbalacha joined #gluster
16:08 anrao joined #gluster
16:11 pranithk joined #gluster
16:15 Rapture joined #gluster
16:16 haomai___ joined #gluster
16:16 dgbaley joined #gluster
16:17 nangthang joined #gluster
16:18 arthurh joined #gluster
16:18 dgbaley I've been slamming my head against this for 2 days now. I am having permissions issues when trying to do gfapi+rdma+nonroot. With an identical volume except for transport=tcp, no issues. With rdma+root, no problem. Is this a bug?
16:20 JoeJulian dgbaley: Any logs that indicate anything?
16:22 dgbaley JoeJulian: http://pastebin.com/PLZ8gG9n
16:22 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:23 dgbaley Just that root isn't able to bind rdma_cm_id to port < 1024
16:23 dgbaley http://fpaste.org/240965/
16:28 dgbaley Eh, sorry, that non-root isn't able to bind < 1024, but that should be expected.
16:29 anoopcs dgbaley, I assume you must have faced the issue for gfapi+tcp+nonroot.
16:30 anoopcs dgbaley, ports < 1024 can only be accessed by root I guess
16:30 anoopcs dgbaley, As you said just now. :)
16:31 dgbaley anoopcs: I don't know what you mean. With tcp the client binds to an unprivileged port and I have allow-insecure on for the vol and glusterd, so everything works as expected
16:32 JoeJulian dgbaley: Reading through the source, it does look like a bug. There's logic in rpc/rpc-transport/socket/src/name.c to allow the use of ports > 1024 with the allow_insecure setting, but no such logic in rpc/rpc-transport/rdma/src/name.c
16:33 dgbaley Here's the vol info/status for each http://fpaste.org/240969/
16:33 garmstrong_ joined #gluster
16:34 rafi joined #gluster
16:34 dgbaley Oh, yay. I just moved from ubuntu to centos, but was able to rebuild qemu easily enough to link against gfapi. Maybe I can hack in a patch
16:35 dgbaley I'm trying to convince some people that we should be using gluster instead of ceph, so rdma+gfapi should be great for benchmarks
16:36 JoeJulian https://github.com/gluster/glusterfs/blob/master/rpc/rpc-transport/socket/src/name.c#L440
16:36 JoeJulian There's the logic
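For reference, a sketch of the two insecure-port settings mentioned in this discussion for non-root gfapi clients; the volume name is a placeholder and the glusterd.vol path is the usual default:

    # per-volume: let bricks accept client connections from unprivileged ports
    gluster volume set <volname> server.allow-insecure on
    # glusterd: add the line below to /etc/glusterfs/glusterd.vol and restart glusterd
    #   option rpc-auth-allow-insecure on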
16:36 ndevos dgbaley: rafi is one of our rdma specialists and will surely be able to help you out patching things
16:37 ndevos dgbaley: and let us know if you have a patch and can pass it on to us so that we can include it :)
16:37 dgbaley Sure
16:38 firemanxbr joined #gluster
16:39 * rafi is looking through the logs
16:40 rafi dgbaley: by the way , there is a patch under review for "bind insecure" and "allow insecure"
16:41 rafi dgbaley: http://review.gluster.org/#/c/11512/
16:42 rafi dgbaley: which version of gluster are you running ?
16:44 dgbaley 3.7.2
16:45 dgbaley CentOS 7, EPEL, download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/
16:45 bennyturns joined #gluster
16:45 dgbaley And I mentioned this some hours ago, but I am surprised the qemu-img in EPEL is linked against gfapi, but qemu (qemu-system-x86_64) is not
16:49 arthurh joined #gluster
16:53 gem joined #gluster
16:53 ndevos dgbaley: I think qemu(-img) comes from the RHEL/CentOS packages, not from EPEL?
16:54 ndevos dgbaley: and, recent versions should be linked against libgfapi, that gave us (Gluster community) quite some headaches a while back
16:55 gem joined #gluster
16:55 rafi dgbaley: I will get back to you,
16:55 rafi dgbaley: going for dinner ;)
16:56 pranithk joined #gluster
16:56 dgbaley ndevos: Not sure (again, very new to the rh/cent/fedora world), but when I do "yum list *qemu*", they all have the same version and the 3rd column is "epel"
16:56 ndevos rafi: enjoy!
16:56 gem joined #gluster
16:57 CyrilPeponnet joined #gluster
16:57 CyrilPeponnet joined #gluster
16:57 CyrilPeponnet joined #gluster
16:57 ndevos dgbaley: hmm, okay, I did not know epel provided qemu packages... that normally is not liked very much as the RHEL/CentOS ones are mostly preferred
16:58 CyrilPeponnet joined #gluster
16:58 CyrilPeponnet joined #gluster
16:58 gem joined #gluster
16:59 CyrilPeponnet joined #gluster
16:59 CyrilPeponnet joined #gluster
17:00 rotbeard joined #gluster
17:00 jblack joined #gluster
17:00 gem joined #gluster
17:02 jblack Good morning!  Presuming a potential two node cluster with members A and B... is there a way to ascertain, from A, whether B is ready for a peering relationship?
17:02 gem joined #gluster
17:05 gem joined #gluster
17:06 ndevos jblack: this should return only "localhost", from A: gluster --remote-host=B pool list
17:06 ndevos I did not test that, not sure if it works :)
17:07 gem joined #gluster
17:08 jblack To expand slightly,  I'm doing a deployment on AWS with chef.  Two instances, A and B, come up at effectively the same time, and they need to peer with one another.  The first ready node bombs out (as the other node isn't ready yet).
17:08 JoeJulian Your first problem is "chef" ;)
17:08 jblack I'm trying to figure out a way to figure out that both nodes are ready prior to establishing a peering reltionship
17:09 gem joined #gluster
17:09 JoeJulian Check that you can establish a tcp connection to 24007
17:09 JoeJulian If you can, it's listening.
17:09 jblack That's easily done.
17:09 jblack perfect, thanks. =)
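A minimal sketch of that readiness check before probing, based on the two suggestions above; the peer hostname "B" is a placeholder and nc is assumed to be available:

    # wait until glusterd on B is listening on 24007, then probe it
    until nc -z -w 2 B 24007; do sleep 5; done
    gluster peer probe B
    # or ask B's glusterd directly, which should list only "localhost" before peering
    gluster --remote-host=B pool list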
17:10 JoeJulian Then switch to salt. ;)
17:12 PeterA joined #gluster
17:12 PeterA what does [2015-07-07 16:47:39.800351] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: xfs_info exited with non-zero exit status
17:12 PeterA mean?
17:13 JoeJulian You're not using xfs for your bricks?
17:13 PeterA i am
17:13 jblack The consulting world doesn't quite work that way.
17:13 JoeJulian jblack: I'm just being facetious.
17:14 mpietersen joined #gluster
17:14 PeterA with my filesystem on xfs, wonder why still getting those msg
17:15 mpietersen has anyone run into an issue with the following error:
17:15 mpietersen gluster system:: execute gsec_create
17:15 mpietersen Staging failed on 10.x.x.x . Error: gsync peer_gsec_create command not found.
17:16 mckaymatt joined #gluster
17:16 mpietersen i've set up a volume with peers just fine, however trying to set up geo-rep it's giving me a bunch of issues
17:16 mpietersen ver. 3.7.2
17:16 JoeJulian PeterA: run xfs_info against the bricks by hand and see if there's a non-zero exit status?
17:17 PeterA just did and got a 0 exit
17:18 PeterA http://pastie.org/10278008
17:19 ndevos PeterA: got the xfs_info binary in $PATH?
17:19 PeterA yes
17:20 PeterA http://pastie.org/10278010
17:20 PeterA that is pretty much the only error i got after patched from 3.5.2 to 3.5.4
17:20 ndevos hmm, doesnt look very exciting...
17:20 ndevos PeterA: maybe SElinux?
17:20 PeterA it's on ubuntu 12.04
17:20 JoeJulian An strace might show something.
17:21 PeterA how?
17:21 ndevos CyrilPeponnet: I'm planning to do 3.5.5 tonight, but need someone to review the release notes: http://review.gluster.org/11568
17:22 CyrilPeponnet Good ! I'm also planning an upgrade to 3.6 of our infrastructure
17:22 PeterA what's new in 3.5.5? I just went up to 3.5.4….
17:22 ndevos PeterA: it really only calls xfs_info, not even with a path, so having it in $PATH should be sufficient
17:22 jblack joined #gluster
17:23 PeterA then wonder why i keep getting the error
17:23 PeterA on the glusterd.log
17:24 ndevos PeterA: what is the full path for that brick?
17:24 PeterA "/brick03/gfs"
17:24 JoeJulian It doesn't use the path according to glusterd-utils.c. It uses /usr/sbin/xfs_tools or /sbin/xfs_tools
17:24 ndevos PeterA: try passing that dir to xfs_info
17:24 JoeJulian er, xfs_info
17:25 JoeJulian /usr/sbin/$fs_tool_name /sbin/$fs_tool_name
17:25 ndevos PeterA: http://review.gluster.org/#/c/11568/1/doc/release-notes/3.5.5.md are the changes in 3.5.5, 4 bugs fixed
17:25 PeterA http://pastie.org/10278017
17:26 JoeJulian To use strace, "strace -f glusterd". If it were me, I'd "strace -ff -o /tmp/glustertrace glusterd". That would create a new trace file per pid so you could more easily find the call to xfs_info and the trace of xfs_info itself.
17:27 JoeJulian But I'm also lazy and would probably only do that if something wasn't working. Otherwise I'd just ignore the error. ;)
17:27 PeterA how to stop the strace?
17:27 JoeJulian ctrl-c
17:27 ndevos ah, yes, indeed, /usr/sbin/xfs_info or /sbin/xfs_info : https://github.com/gluster/glusterfs/blob/release-3.5/xlators/mgmt/glusterd/src/glusterd-utils.c#L5084
17:27 PeterA ic....
17:28 PeterA i did a strace -ff -o /tmp/glustertrace glusterd
17:28 PeterA then it just done
17:28 PeterA how do i stop the strace?
17:28 JoeJulian You'd have to stop the other glusterd first.
17:28 PeterA huh?
17:28 JoeJulian You can't have two glusterd running at the same time.
17:29 PeterA oh it would require a downtime?
17:29 PeterA ic
17:29 PeterA hm….can't do that at this point...
17:31 ndevos jblack: btw, if you offer consulting services for Gluster, you could consider getting yourself/company added to http://www.gluster.org/consultants/
17:31 PeterA if i am able to stop the glusterd/brick
17:31 PeterA how do i start up the glusterd with the right brick?
17:31 PeterA for strace?
17:31 ndevos PeterA: you can attach strace to a PID as well
17:32 PeterA oh how?
17:32 PeterA strace -p ?
17:32 jblack ndevos: Thank you! I'll do that some day when I have solid experience with gluster
17:32 ndevos PeterA: yeah, I think so
17:32 JoeJulian yeah, but since it only checks the xfs_info on brick start, that's not going to tell him anything.
17:33 ndevos oh, does it? also in 3.5? I'm not sure about those different versions anymore...
17:34 PeterA i m still getting that xfs_info error very frequently
17:34 ndevos in that case, you can do: strace -p $PID -e exec
17:34 JoeJulian Hmm, Well I could easily be wrong on that.
17:35 dgbaley Ugh, is there supposed to be a gerrit link to get a raw patch so I can apply it?
17:35 PeterA strace: invalid system call `exec'
17:35 ndevos oh, maybe it will retry when xfs_info failed?
17:35 ndevos PeterA: oh, maybe execve, or 'exec*'
17:36 PeterA o execve works
17:36 ndevos dgbaley: I advise you to install the git-review package, then you can: git review -x 11512
17:36 ndevos dgbaley: actually, git review -r origin -x 11512
17:37 jmarley joined #gluster
17:37 ndevos dgbaley: if "origin" points to the Gerrit repository, not a mirror like GitHub
17:37 dgbaley Ah, so there's a gerrit repo? Thanks, never used gerrit before
17:37 ndevos dgbaley: otherwise there is a download link in the upper right corner, it'll give you some commands to copy/paste
17:38 ndevos dgbaley: yeah, "git remote add gerrit http://review.gluster.org/glusterfs" I think
17:39 ndevos dgbaley: if you call the gerrit repository "gerrit", git-review does not need the "-r ..." option ;-)
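A sketch of pulling that change locally, either via git-review as described above or with a plain gerrit fetch; the patchset number 1 is an assumption, so check the change page for the current one:

    git remote add gerrit http://review.gluster.org/glusterfs
    git review -r gerrit -x 11512
    # without git-review: fetch the change ref and cherry-pick it (patchset 1 assumed)
    git fetch http://review.gluster.org/glusterfs refs/changes/12/11512/1 && git cherry-pick FETCH_HEAD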
17:39 dgbaley Ah, I see, they hide the good stuff from you if you're not logged in
17:39 ndevos PeterA: do you get some strace output?
17:39 PeterA waiting
17:39 PeterA seems like getting the xfs_info error every 5 mins
17:40 ndevos ah, ok
17:40 PeterA strace -p $PID -e execve is quiet now
17:40 PeterA i got these when run without -e execve
17:40 PeterA http://pastie.org/10278035
17:41 ndevos yeah, you could drop the "-e execve" to show all system calls, but that would mean all I/O and other funky bits, its rather difficult to read :)
17:42 ndevos PeterA: you can just run the strace without the execve, and then grep for xfs later on
17:42 PeterA may i just grep for xfs_info from strace -p $PID ?
17:43 mckaymatt joined #gluster
17:43 ndevos you could, but maybe you should add a -v option so that the strings get expanded and not chopped off, like "/usr/bin/xf..."
17:51 mpietersen is anyone using 3.7.2?
17:55 mpietersen bueler?
18:03 julim_ joined #gluster
18:06 PeterA hmm….i straced glusterd over the time we got the xfs_info error
18:06 PeterA but couldn't grep any xfs related from the strace output
18:07 jiffin joined #gluster
18:07 ndevos PeterA: are you sure you strace'd the right process?
18:07 PeterA ya
18:08 PeterA glusterd ?
18:08 ndevos PeterA: you should be able to find the PID with 'lsof /path/to/log/with/errors'
18:08 PeterA oic
18:08 PeterA i will look into that
18:08 ndevos maybe glusterd, I dont know :)
18:08 ndevos well, I'd actually expect a brick process, but you can never be sure ;-)
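A sketch that ties the lsof and strace suggestions together; the glusterd log path below is the usual default and is an assumption:

    # find the process writing the log that shows the xfs_info error
    PID=$(lsof -t /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | head -n 1)
    # trace only exec calls, keep long strings, write to a file; stop with Ctrl-C after the error recurs
    strace -f -p "$PID" -e trace=execve -s 256 -o /tmp/xfs_info.trace
    grep xfs_info /tmp/xfs_info.trace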
18:09 woakes070048 joined #gluster
18:10 mpietersen so... no one?
18:10 mpietersen no one here uses gluster for geo-rep.
18:10 ndevos mpietersen: I think people are, but they may not be reading irc constantly :)
18:11 mpietersen finally. i was just curious as no one at all responded
18:11 mpietersen wasn't sure if I was a ghost or not.
18:11 ndevos oh, no, I can see you!
18:11 mpietersen half life 3 confirmed
18:12 mpietersen i have private key auth set up between the master and node, so ssh doesn't seem to be the problem
18:13 mpietersen i'm getting "0-management: Invalid slave name"
18:13 mpietersen it's frustrating because the latest available through the epel repo is 3.7.2, but all the documentation is for 3.5
18:13 dgbaley rafi: so after much pain, patching, and packaging, it turns out my problem was that I didn't set the memlock to unlimited
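For anyone hitting the same RDMA issue, a sketch of raising the locked-memory limit; the file and values below are the common convention, not something taken from this log:

    # /etc/security/limits.conf (or a drop-in under /etc/security/limits.d/)
    * soft memlock unlimited
    * hard memlock unlimited
    # verify in a fresh login session
    ulimit -l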
18:13 glusterbot News from newglusterbugs: [Bug 1117888] Problem when enabling quota : Could not start quota auxiliary mount <https://bugzilla.redhat.com/show_bug.cgi?id=1117888>
18:13 glusterbot News from newglusterbugs: [Bug 1134050] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1134050>
18:13 glusterbot News from newglusterbugs: [Bug 1192075] libgfapi clients hang if glfs_fini is called before glfs_init <https://bugzilla.redhat.com/show_bug.cgi?id=1192075>
18:14 rafi dgbaley: :)
18:14 rafi dgbaley: I was just testing scenario in my setup
18:15 papamoose2 joined #gluster
18:15 ndevos mpietersen: I have no idea what changed after 3.5 for geo-rep, but mostly those devs keep their docs up to date...
18:15 dgbaley And I broke something else along the way because "yum list inst *gluster*" now only shows glusterfs.x86_64 even though I know most of glusterfs-* is installed, oh well.
18:15 ndevos dgbaley: try: rpm -qa "glusterfs*"
18:16 rafi dgbaley: because, in rdma we first try, whether there is any privileged port, if it couldn't find, then rdma will try to bind using 0, so any port ;)
18:16 dgbaley rafi: Since you're the RDMA guy.... I have all Mellanox hardware (ethernet NICs, using RoCE). Should I bother with MLNX_OFED, or is the group "Infiniband Support" good enough?
18:16 rafi dgbaley: the patch was sent to address this issue
18:17 rafi dgbaley: group install will be enough to run gluster
18:17 dgbaley ndevos: sorry, you know my system is such a mess right now I've lost track of certain things. "yum list glust*" was behaving different because I now have files that match that in my cwd =/
18:17 dgbaley rafi: cool, that makes my life so much easier -- I just grabbed the mlnx_ofed tgz long enough to flash the firmware
18:17 rafi dgbaley: but MLNX_OFED will have most of the IB related packages and perf utils
18:18 rafi dgbaley: :)
18:18 ndevos dgbaley: yeah, I was expecting you to have some *gluster* files in your CWD :)
18:20 rafi dgbaley: please feel free to ping me or send a mail to gluster-devel ccing me , if you have any problem with rdma
18:20 jmarley joined #gluster
18:20 bennyturns joined #gluster
18:21 wkf joined #gluster
18:21 ramkrsna joined #gluster
18:22 dgbaley rafi: thanks -- I still do actually, qemu is still failing ... but I haven't poked at it long enough yet
18:23 dgbaley librdmacm: Fatal: unable to open /dev/infiniband/rdma_cm, even though I set perms to ugo=rw on /dev/infiniband/* AND I'm running as root
18:24 ndevos CyrilPeponnet: v3.5.5 has been tagged and pushed, tarball is available from http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.5.tar.gz - RPMs will follow later
18:33 spalai1 left #gluster
18:34 nsoffer joined #gluster
18:34 vimal joined #gluster
18:34 CyrilPeponnet awesome thanks @ndevos
18:37 jdossey joined #gluster
18:44 rafi dgbaley: i haven't played much about qemu
18:46 dgbaley rafi: well, I'm working on trying to do a fuse mount as a normal user -- but that might be suid -- so I don't know how else to test a non-privileged RDMA transport
18:47 rafi dgbaley: i will write share a script
18:48 rafi *write and share
18:48 rafi :)
18:48 rafi dgbaley: but time is running too fast in india, it is 12:30 AM ,
18:49 rafi dgbaley: tomorrow will work for you ?
18:52 dgbaley rafi: Thank you, yes I'll be around.
18:53 rafi dgbaley: mean time, if you have any problem , send a mail to gluster-devel , or file a bug
18:53 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:55 * rafi is going for some sweet dreams :)
19:00 ira joined #gluster
19:02 sage joined #gluster
19:02 arthurh joined #gluster
19:06 autoditac joined #gluster
19:07 cyberswat joined #gluster
19:17 side_control joined #gluster
19:21 topshare joined #gluster
19:31 jcastill1 joined #gluster
19:32 anrao joined #gluster
19:37 jcastillo joined #gluster
19:45 arthurh joined #gluster
19:52 calavera joined #gluster
19:53 jrm16020 joined #gluster
19:58 jblack joined #gluster
20:04 ctria joined #gluster
20:27 DV joined #gluster
20:32 jblack joined #gluster
21:02 theron_ joined #gluster
21:08 ctria joined #gluster
21:10 calavera joined #gluster
21:15 cholcombe joined #gluster
21:25 aaronott joined #gluster
21:28 PeterA joined #gluster
21:35 jmarley joined #gluster
21:54 yingw787 joined #gluster
21:55 yingw787 hey guys
21:55 yingw787 is this the glusterfs community
21:55 yingw787 I am trying to mount a volume
21:55 yingw787 but I am getting an error
21:56 yingw787 Mount failed: Please check the log file for more details
22:02 calavera joined #gluster
22:18 NTQ joined #gluster
22:24 badone joined #gluster
22:24 hagarth joined #gluster
22:38 PeterA joined #gluster
22:39 wushudoin| joined #gluster
22:43 hagarth joined #gluster
22:44 wushudoin| joined #gluster
22:58 stickyboy joined #gluster
23:00 TheCthulhu joined #gluster
23:13 ajames-41678 joined #gluster
23:14 doctorray Does anyone have any tips on limiting cpu usage (or other resources) during a rebalance?  I've tried the cgroups approach, giving glusterd and glusterfsd only 256 shares apiece, but it only seems to help a little bit.  Load average still creeps up to higher than cpu count.
23:15 doctorray I'm testing on AWS with c4.large instances with magnetic bricks.  vm is centos 7, using latest gluster
23:17 doctorray four bricks, two nodes, distributed replica..
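One way to make the cgroup approach bite harder is a hard CPU quota instead of shares; a sketch for CentOS 7 with cgroup v1, where the group name and numbers are illustrative assumptions:

    yum install -y libcgroup-tools
    cgcreate -g cpu:/glusterlimit
    echo 100000 > /sys/fs/cgroup/cpu/glusterlimit/cpu.cfs_period_us
    echo 50000  > /sys/fs/cgroup/cpu/glusterlimit/cpu.cfs_quota_us    # cap at roughly half a CPU
    for pid in $(pidof glusterfsd); do cgclassify -g cpu:/glusterlimit "$pid"; done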
23:19 Lee- joined #gluster
23:26 gildub joined #gluster
23:32 plarsen joined #gluster
23:34 Lee- joined #gluster
23:40 B21956 joined #gluster
23:50 B21956 joined #gluster
23:55 prg3 joined #gluster
