
IRC log for #gluster, 2016-05-20


All times shown according to UTC.

Time Nick Message
00:05 shyam joined #gluster
00:22 luizcpg joined #gluster
00:36 fcoelho joined #gluster
00:47 itisravi joined #gluster
00:52 ahino joined #gluster
01:09 kdhananjay joined #gluster
01:09 PaulCuzner joined #gluster
01:22 dlambrig_ joined #gluster
01:23 chirino_m joined #gluster
01:32 Lee1092 joined #gluster
01:36 EinstCrazy joined #gluster
01:40 chirino joined #gluster
01:46 poornimag joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 gbox joined #gluster
01:49 plarsen joined #gluster
02:02 harish joined #gluster
02:04 EinstCrazy joined #gluster
02:19 auzty joined #gluster
02:29 haomaiwang joined #gluster
02:30 haomaiwang joined #gluster
02:31 haomaiwang joined #gluster
02:32 haomaiwang joined #gluster
02:32 kdhananjay1 joined #gluster
02:33 haomaiwang joined #gluster
02:34 haomaiwang joined #gluster
02:35 haomaiwang joined #gluster
02:36 haomaiwang joined #gluster
02:45 andy-b joined #gluster
02:53 EinstCrazy joined #gluster
03:01 haomaiwang joined #gluster
03:30 EinstCrazy joined #gluster
03:40 nbalacha joined #gluster
03:41 EinstCra_ joined #gluster
03:43 itisravi joined #gluster
03:46 atinm joined #gluster
03:47 DV joined #gluster
03:48 harish joined #gluster
03:49 nathwill joined #gluster
03:51 nehar joined #gluster
03:53 RameshN joined #gluster
03:58 unforgiven512 joined #gluster
04:01 haomaiwang joined #gluster
04:03 hagarth joined #gluster
04:08 nehar joined #gluster
04:17 luizcpg joined #gluster
04:21 itisravi joined #gluster
04:23 shubhendu joined #gluster
04:23 davpenguin joined #gluster
04:23 davpenguin hi all
04:23 davpenguin anyone know the gfdb utility for debugging glusterfs?
04:25 sankarshan_away joined #gluster
04:29 sakshi joined #gluster
04:33 beeradb joined #gluster
04:36 gowtham joined #gluster
04:39 gbox joined #gluster
04:44 pdrakeweb joined #gluster
04:47 beeradb joined #gluster
04:48 spalai joined #gluster
04:48 DV joined #gluster
04:50 raghug joined #gluster
04:50 vshankar joined #gluster
04:54 EinstCrazy joined #gluster
04:55 Apeksha joined #gluster
04:55 nathwill joined #gluster
04:59 nishanth joined #gluster
05:01 haomaiwang joined #gluster
05:01 kotreshhr joined #gluster
05:03 gem joined #gluster
05:04 EinstCrazy joined #gluster
05:06 Bhaskarakiran joined #gluster
05:07 ndarshan joined #gluster
05:09 gbox joined #gluster
05:10 pdrakeweb joined #gluster
05:16 jiffin joined #gluster
05:18 kotreshhr joined #gluster
05:19 aravindavk joined #gluster
05:23 prasanth joined #gluster
05:23 poornimag joined #gluster
05:24 nbalachandran_ joined #gluster
05:29 aspandey joined #gluster
05:30 haomaiwang joined #gluster
05:32 aravindavk joined #gluster
05:35 kdhananjay1 Ulrar: sorry, I haven't had the chance to look into your issues. I will get back to you soon on this, hopefully early next week.
05:37 auzty joined #gluster
05:38 Saravanakmr joined #gluster
05:42 hgowtham joined #gluster
05:44 ppai joined #gluster
05:44 atinm joined #gluster
05:45 skoduri joined #gluster
05:46 rastar joined #gluster
05:51 DV joined #gluster
05:54 pdrakeweb joined #gluster
06:01 haomaiwang joined #gluster
06:01 anil joined #gluster
06:04 auzty joined #gluster
06:06 EinstCrazy joined #gluster
06:13 kdhananjay joined #gluster
06:17 aspandey joined #gluster
06:18 jtux joined #gluster
06:18 RameshN joined #gluster
06:20 spalai joined #gluster
06:20 jiffin1 joined #gluster
06:24 muneerse2 joined #gluster
06:28 nage joined #gluster
06:29 Dave____ joined #gluster
06:33 kotreshhr joined #gluster
06:34 Ulrar Alright, thanks
06:34 Ulrar kdhananjay: I'll try to start the VM by hand and reproduce today if I have some time, as suggested by Lindsay
06:34 Ulrar See if the logs are any help
06:35 karnan joined #gluster
06:36 Ulrar Still think it might just be hardware since I seem to be the only one with that problem
06:36 auzty joined #gluster
06:37 primusinterpares joined #gluster
06:40 EinstCrazy joined #gluster
06:40 nishanth joined #gluster
06:46 poornimag joined #gluster
06:48 EinstCrazy joined #gluster
06:49 gbox joined #gluster
06:54 aravindavk joined #gluster
06:59 aspandey joined #gluster
07:00 spalai joined #gluster
07:01 haomaiwang joined #gluster
07:04 EinstCra_ joined #gluster
07:08 kdhananjay joined #gluster
07:10 harish joined #gluster
07:19 nbalachandran_ joined #gluster
07:24 fsimonce joined #gluster
07:28 kshlm joined #gluster
07:31 harish joined #gluster
07:33 yoavz joined #gluster
07:33 kdhananjay joined #gluster
07:35 ivan_rossi joined #gluster
07:37 rastar joined #gluster
07:39 atinm joined #gluster
07:39 jiffin1 joined #gluster
07:49 kovshenin joined #gluster
07:53 poornimag joined #gluster
07:57 hackman joined #gluster
08:01 haomaiwang joined #gluster
08:04 DV joined #gluster
08:10 hagarth joined #gluster
08:14 hchiramm joined #gluster
08:22 harish joined #gluster
08:37 Slashman joined #gluster
08:39 sayid joined #gluster
08:43 sayid hi all, question... Is it useful to create a gluster volume on machines with one hard disk?
08:44 sayid for example I have this: 1 physical machine with 7 nodes, and only 1 SSD per node.
08:44 sayid and right now I have shared storage on node #1 that is shared across all the other nodes using NFS
08:45 sayid is it handy to create a distributed gluster volume across all nodes, keeping in mind there is only 1 SSD per node?
08:45 chirino joined #gluster
08:47 gowtham joined #gluster
08:57 jiffin sayid: glusterfs has no specific requirement on the number of disks per node
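For context, a purely distributed volume over single-brick nodes like sayid describes could be created roughly as follows; the hostnames and brick paths are hypothetical, and note that a plain distributed volume spreads files without any redundancy:

    # probe the other nodes once from node1, then use one brick per SSD
    gluster peer probe node2        # repeat for node3 .. node7
    gluster volume create distvol \
        node1:/data/brick node2:/data/brick node3:/data/brick \
        node4:/data/brick node5:/data/brick node6:/data/brick \
        node7:/data/brick
    gluster volume start distvol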
08:58 aravindavk joined #gluster
08:59 aravindavk joined #gluster
09:00 jiffin joined #gluster
09:01 haomaiwang joined #gluster
09:11 auzty joined #gluster
09:25 spartapapate joined #gluster
09:26 spartapapate Hey guys, I'm setting up gluster for replication between two application servers and I have a small problem
09:27 spartapapate I set up a mount unit in systemd for the volume, but since the servers and clients are on the same machines (which, I realize, is not an optimal setup), even with After=networking and After=glusterfs-server it ends up trying to mount before the volume is actually started
09:27 spartapapate at boot time
09:27 spartapapate any idea how I can fix this? :/
09:29 spalai joined #gluster
09:31 mrEriksson Yeah, I've seen that too on Suse
09:32 mrEriksson I added the volumes to fstab as noauto, and then ran a script late in the startup process that did another pass through the fstab and mounted all the gluster volumes
09:32 spartapapate ugh, that's kind of ugly though, tbh
09:33 spartapapate hmm
09:33 mrEriksson It is
09:33 mrEriksson Though, it was the best solution I could come up with
09:34 mrEriksson Better than hard coding the mounts later in the startup process
09:34 spartapapate i guess
09:35 shubhendu joined #gluster
09:36 spartapapate could replace the mount unit with a script that polls to check whether the processes are really running, but... that's a bit like going back to the stone age
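A minimal sketch of the workaround mrEriksson describes, assuming a volume named myvol mounted from localhost (both names hypothetical):

    # /etc/fstab entry: noauto keeps it out of the normal boot sequence
    localhost:/myvol  /mnt/myvol  glusterfs  defaults,noauto,_netdev  0 0

    #!/bin/sh
    # Run late in startup, once glusterd has had time to start the volumes:
    # mount every glusterfs fstab entry that is not already mounted.
    awk '$3 == "glusterfs" { print $2 }' /etc/fstab | while read -r mnt; do
        mountpoint -q "$mnt" || mount "$mnt"
    done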
09:36 luizcpg joined #gluster
09:56 Debloper joined #gluster
09:59 amye joined #gluster
10:01 haomaiwang joined #gluster
10:29 robb_nl joined #gluster
10:34 pdrakeweb joined #gluster
10:36 raghug joined #gluster
10:45 rastar joined #gluster
10:46 atinm joined #gluster
10:47 skoduri joined #gluster
10:49 gbox joined #gluster
10:52 amye joined #gluster
10:53 gvandeweyer joined #gluster
10:53 aravindavk joined #gluster
10:56 johnmilton joined #gluster
11:01 haomaiwang joined #gluster
11:08 chirino joined #gluster
11:13 luizcpg joined #gluster
11:25 raghug joined #gluster
11:27 gbox joined #gluster
11:28 csaba joined #gluster
11:30 ira joined #gluster
11:38 arcolife joined #gluster
11:40 skoduri joined #gluster
11:51 KenM JoeJulian: I did not rebalance.  I created a new volume so I didn't think I needed to.  I will try this, thanks!  I suspected it was a newbie mistake ;)
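For reference, the rebalance JoeJulian had suggested is started and monitored like this (volume name hypothetical):

    gluster volume rebalance myvol start
    gluster volume rebalance myvol status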
11:55 cyberbootje joined #gluster
11:58 anil joined #gluster
12:01 chirino joined #gluster
12:02 julim joined #gluster
12:02 gvandeweyer does gluster have a 'head node'? Once you have set up gluster in a 3-server distributed/replicated setup (3*2), is it expected that the server from which you configured gluster has a significantly higher load? More bandwidth usage, more CPU load (a continuous load of ~10, up to 30, only running gluster).
12:03 atinm joined #gluster
12:03 hi11111 joined #gluster
12:03 gvandeweyer the second server has an identical brick setup and sizes, but is slightly lower in CPU/RAM specs. Nevertheless, it has only a load of ~1, while server 1 is grinding almost to a halt.
12:12 jiffin gvandeweyer: Nope
12:13 jiffin gvandeweyer: did you mount the client process on the same server?
12:16 Guest76930 joined #gluster
12:18 gvandeweyer jiffin: I did
12:18 gvandeweyer jiffin: I unmounted it but load is still quite high
12:20 jiffin gvandeweyer: are you running any glusterfs internal processes like self-heal or rebalance?
12:22 gowtham joined #gluster
12:22 gvandeweyer hmm, how do I check self-heal?
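To answer that question: pending self-heals can be inspected per volume with the heal commands (volume name hypothetical):

    # files currently queued for heal, listed per brick
    gluster volume heal myvol info
    # entries stuck in split-brain, if any
    gluster volume heal myvol info split-brain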
12:33 spalai left #gluster
12:39 bluenemo joined #gluster
12:57 julim joined #gluster
12:59 pdrakewe_ joined #gluster
13:04 plarsen joined #gluster
13:07 pdrakeweb joined #gluster
13:11 spalai joined #gluster
13:12 spalai left #gluster
13:12 pdrakewe_ joined #gluster
13:17 DV joined #gluster
13:20 EinstCrazy joined #gluster
13:27 nbalachandran_ joined #gluster
13:29 EinstCrazy joined #gluster
13:31 F2Knight joined #gluster
13:48 EinstCrazy joined #gluster
13:52 kotreshhr joined #gluster
13:53 shyam joined #gluster
13:54 pdrakeweb joined #gluster
13:59 TvL2386 joined #gluster
14:06 skylar joined #gluster
14:06 rwheeler joined #gluster
14:11 nbalacha joined #gluster
14:16 alvinstarr I have 2 peers. A points to B, but B points to a misspelled hostname. How badly is this screwing things up?
14:18 alvinstarr I am hoping I can set up a new peer and delete the mistaken one without many side effects.
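A hedged sketch of one possible fix (hostnames hypothetical): probing a peer again by its correct name from the other side can update the stored hostname, so it may be worth trying before detaching anything:

    # run on the peer that stored the misspelled name
    gluster peer probe nodeA.example.com
    gluster peer status      # verify both sides now agree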
14:18 nehar joined #gluster
14:20 gbox joined #gluster
14:26 pdrakewe_ joined #gluster
14:34 ivan_rossi left #gluster
14:41 atinm joined #gluster
14:48 alvinstarr A second question. To upgrade systems, can I take one peer offline, upgrade it, put it back online, and then do the same to the second peer?
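A rough outline of that one-peer-at-a-time upgrade for a replicated volume, as a sketch rather than an official procedure (service and volume names are assumptions; check the release notes for the target version):

    # on peer A only:
    service glusterd stop
    killall glusterfs glusterfsd      # stop client and brick processes
    yum update glusterfs\*            # or the distro's equivalent
    service glusterd start
    # let self-heal catch up before repeating on peer B:
    gluster volume heal myvol info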
14:59 haomaiwang joined #gluster
14:59 haomaiwang joined #gluster
15:00 kpease joined #gluster
15:01 haomaiwang joined #gluster
15:02 ivan_rossi joined #gluster
15:10 gem_ joined #gluster
15:16 jiffin joined #gluster
15:19 shyam1 joined #gluster
15:21 m0zes joined #gluster
16:01 haomaiwang joined #gluster
16:05 poornimag joined #gluster
16:07 karnan joined #gluster
16:09 ahino joined #gluster
16:10 shaunm joined #gluster
16:12 akay joined #gluster
16:12 wushudoin joined #gluster
16:13 wushudoin joined #gluster
16:17 Debloper joined #gluster
16:20 gbox joined #gluster
16:25 level7 joined #gluster
16:26 DV joined #gluster
16:42 shubhendu joined #gluster
16:54 hackman joined #gluster
16:58 spalai joined #gluster
17:07 mpietersen joined #gluster
17:10 haomaiwang joined #gluster
17:32 ehermes joined #gluster
17:33 ehermes Should the userland glusterfs process be given a low (<0) nice value?
17:39 chirino_m joined #gluster
17:42 squizzi_ joined #gluster
17:48 mpietersen joined #gluster
17:50 unclemarc joined #gluster
17:55 shyam joined #gluster
17:57 nathwill joined #gluster
18:07 mowntan joined #gluster
18:18 mpietersen joined #gluster
18:40 pdrakeweb joined #gluster
18:57 pdrakeweb joined #gluster
19:03 post-factum ehermes: no, why?
19:09 mpietersen joined #gluster
19:11 chirino joined #gluster
19:15 ehermes I was just wondering.
19:15 ehermes I'm just a user of a cluster that uses gluster
19:15 ehermes and it constantly has issues with I/O being absurdly slow and nodes getting stale file handles and such
19:16 ehermes the glusterfs userland process is constantly pegging a core on the head nodes
19:16 ehermes another user was compiling software in a... less than intelligent manner, and the head node loadavg passed 300
19:17 ehermes I don't necessarily expect the system to be *completely* responsive under those conditions but it seemed strange to me that glusterfs has to compete with other userland processes for cpu time
19:18 F2Knight joined #gluster
19:20 julim joined #gluster
19:28 post-factum ehermes: why does the glusterfs process use so much CPU time for you?
19:39 JoeJulian I'm betting on self-heal
19:40 JoeJulian ehermes: try "gluster volume set $volname cluster.data-self-heal off"
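That option is one of three client-side self-heal toggles; turning all three off leaves healing to the self-heal daemon (volume name hypothetical):

    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off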
19:42 ehermes as I said, I'm just a user
19:43 ehermes I suspect the sysadmin is not the most experienced though
19:43 JoeJulian Well, point him our direction. We're here to help.
19:43 ehermes "Hey, I think you might need some help managing this cluster, you should ask these guys on IRC"
19:43 ehermes sorry, didn't mean to be glib
19:44 ehermes but I don't see how I could say that in a way that wouldn't be rude
19:44 JoeJulian (Though not for much longer. It's Friday and I don't really feel much like working anymore)
19:44 unforgiven512 joined #gluster
19:45 unforgiven512 joined #gluster
19:46 JoeJulian "I've been having some trouble with this gluster installation not meeting my performance expectations. I was talking with Joe Julian on IRC and he mentioned there's been a problem, recently, with client-side self-heals and suggested we try disabling them. I, obviously, don't know what the ramifications would be if we did that, but he's got a lot of experience with Gluster. Would you be willing to try that? He also seemed more than willing to offer
19:46 JoeJulian his experience on IRC if you'd like to talk with him."
19:47 JoeJulian @lucky ben franklin effect
19:47 glusterbot JoeJulian: Error: The Google Web Search API is no longer available. Please migrate to the Google Custom Search API (https://developers.google.com/custom-search/)
19:47 JoeJulian Nooooooooooooooo!
19:47 JoeJulian Well, until I fix that, google the Ben Franklin Effect and learn to use it. It's a powerful psychological tool that's invaluable in the workplace.
19:49 ehermes fwiw our cluster is running old stuff all around, glusterfs 3.5.5
19:49 ehermes but yeah if you think there's something actually configured wrong I'll shoot an email mentioning that I heard something over the grapevine
19:49 JoeJulian Ok, well then scratch that about client-side self-heal.
19:51 JoeJulian fwiw, compiling is probably one of the worst use cases that exposes the latency inherent in a consistent clustered filesystem.
19:51 ehermes I compile most of my stuff on local storage
19:51 ehermes I dunno about the other users
19:56 kotreshhr joined #gluster
19:56 JoeJulian I don't know if you're in a position to recommend it, but if you all implemented dcache, you could share compiler cache among the group, potentially saving you all a significant amount of time.
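JoeJulian says dcache; a comparable shared-compiler-cache effect with the more common ccache, pointed at a gluster-backed directory, might look roughly like this (paths are hypothetical, and shared-directory permissions would still need sorting out):

    export CCACHE_DIR=/mnt/gluster/shared-ccache   # shared across users
    ccache -M 50G                                  # cap the cache size
    make CC="ccache gcc"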
19:56 ehermes this is a campus-wide scientific compute cluster
19:57 ehermes people are generally compiling different things
19:58 JoeJulian What campus?
19:59 level7 joined #gluster
20:00 ehermes JoeJulian: uw madison
20:00 ehermes University of Wisconsin
20:03 post-factum those admins really should consider upgrading old software
20:04 ehermes I believe we templated this cluster off the UT Austin Stampede cluster
20:04 ehermes which is the only possible explanation for installing CentOS 6 in 2013
20:05 ehermes ok, I guess CentOS 7 was released in 2014
20:06 kotreshhr left #gluster
20:08 post-factum newer gluster versions are available for 6 as well
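On CentOS 6 at the time, one common route to newer gluster packages was the CentOS Storage SIG; the exact release package depends on the branch being targeted (a sketch):

    yum install centos-release-gluster38   # enables the Storage SIG repo
    yum update glusterfs\*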
20:11 DV joined #gluster
20:20 skylar joined #gluster
20:23 chirino joined #gluster
20:31 gbox joined #gluster
20:55 level7 joined #gluster
21:06 wushudoin joined #gluster
21:12 level7 joined #gluster
21:21 level7 joined #gluster
21:31 ecoreply_ joined #gluster
21:39 haomaiwang joined #gluster
21:48 m0zes joined #gluster
21:48 johnmilton joined #gluster
21:55 F2Knight joined #gluster
22:05 johnmilton joined #gluster
22:07 DV joined #gluster
22:19 shaunm joined #gluster
22:27 chirino joined #gluster
22:31 gbox joined #gluster
22:35 mowntan joined #gluster
22:39 F2Knight joined #gluster
23:17 Biopandemic joined #gluster
23:21 F2Knight joined #gluster
23:23 hackman joined #gluster
23:27 haomaiwang joined #gluster
23:41 level7 joined #gluster
23:52 level7 joined #gluster
