
IRC log for #gluster, 2017-03-15


All times are shown in UTC.

Time Nick Message
00:16 kramdoss_ joined #gluster
00:34 arpu joined #gluster
00:43 ahino joined #gluster
01:01 shdeng joined #gluster
01:15 major joined #gluster
01:22 msvbhat joined #gluster
02:07 gem joined #gluster
02:14 prasanth joined #gluster
02:20 squeakyneb joined #gluster
02:22 derjohn_mob joined #gluster
02:32 moneylotion joined #gluster
02:34 ankitr joined #gluster
02:48 rossdm joined #gluster
03:00 kramdoss_ joined #gluster
03:01 major_ joined #gluster
03:03 Gambit15 joined #gluster
03:15 magrawal joined #gluster
03:57 dominicpg joined #gluster
04:00 bwerthmann joined #gluster
04:02 gyadav__ joined #gluster
04:07 bwerthmann joined #gluster
04:09 riyas joined #gluster
04:15 buvanesh_kumar joined #gluster
04:21 ppai joined #gluster
04:26 sanoj joined #gluster
04:29 RameshN joined #gluster
04:42 skumar joined #gluster
04:44 ankitr joined #gluster
04:46 StormTide joined #gluster
04:49 ankitr joined #gluster
04:50 nishanth joined #gluster
04:50 kdhananjay joined #gluster
04:51 Shu6h3ndu joined #gluster
04:59 karthik_us joined #gluster
05:05 ankush joined #gluster
05:17 rafi joined #gluster
05:21 itisravi joined #gluster
05:25 itisravi joined #gluster
05:27 karthik_ joined #gluster
05:27 jiffin joined #gluster
05:32 kotreshhr joined #gluster
05:32 ndarshan joined #gluster
05:34 Saravanakmr joined #gluster
05:35 tamal joined #gluster
05:35 Karan joined #gluster
05:36 tamal Hello, I have question about gluster volume. Is this the right forum to ask it?
05:37 tamal I posted the question http://lists.gluster.org/pipermail/gluster-users/2017-February/030121.html
05:37 glusterbot Title: [Gluster-users] How many clients can mount a single volume? (at lists.gluster.org)
05:37 atm0sphere tamal, yes shoot your questions :)
05:38 tamal I am running a GlusterFS cluster in Kubernetes. This has a single 1x3 volume. But this volume is mounted by around 30 other docker containers. Basically each docker container represents a separate "user" in our multi-tenant application. As a result there are no conflicting writes among the "user"s. Each user writes to their own folder in the volume. My question is: how many clients can mount a GlusterFS volume before it becomes a performance problem?
05:41 apandey joined #gluster
05:41 Prasad joined #gluster
05:42 riyas joined #gluster
05:47 itisravi_ joined #gluster
05:47 tamal @atm0sphere, do you know?
05:51 skoduri joined #gluster
05:56 jiffin1 joined #gluster
05:58 karthik_ joined #gluster
06:00 Shu6h3ndu joined #gluster
06:08 kdhananjay joined #gluster
06:12 jiffin1 joined #gluster
06:12 k4n0 joined #gluster
06:13 msvbhat joined #gluster
06:14 ashiq joined #gluster
06:14 kdhananjay1 joined #gluster
06:16 kdhananjay1 joined #gluster
06:17 ppai joined #gluster
06:26 ahino joined #gluster
06:28 Shu6h3ndu joined #gluster
06:31 apandey joined #gluster
06:31 rafi joined #gluster
06:32 Philambdo joined #gluster
06:33 shdeng joined #gluster
06:44 sona joined #gluster
06:50 ppai joined #gluster
06:50 rafi1 joined #gluster
06:58 sbulage joined #gluster
07:02 sona joined #gluster
07:09 apandey joined #gluster
07:10 nishanth joined #gluster
07:18 hgowtham joined #gluster
07:22 skoduri joined #gluster
07:26 jtux joined #gluster
07:27 skumar_ joined #gluster
07:30 derjohn_mob joined #gluster
07:37 mbukatov joined #gluster
07:42 mlg9000 joined #gluster
07:47 ivan_rossi joined #gluster
07:52 apandey joined #gluster
07:54 rafi1 joined #gluster
07:56 skoduri joined #gluster
08:10 jiffin1 joined #gluster
08:22 mhulsman joined #gluster
08:46 flying joined #gluster
08:46 Seth_Karlo joined #gluster
08:51 Seth_Karlo joined #gluster
08:52 ankush joined #gluster
08:57 [diablo] joined #gluster
09:01 percevalbot joined #gluster
09:11 jiffin joined #gluster
09:13 aardbolreiziger joined #gluster
09:16 k4n0 joined #gluster
09:19 level7_ joined #gluster
09:20 edvin joined #gluster
09:20 BatS9 Good morning!
09:21 BatS9 I'm encountering a bug in NFS-Ganesha which causes it to crash; it seems to be the same as shown in https://bugzilla.redhat.com/show_bug.cgi?id=1428798
09:22 glusterbot Bug 1428798: unspecified, urgent, ---, dang, VERIFIED , [GANESHA] I/O error while performing renaming and lookups from 4 clients
09:22 BatS9 Is there a workaround and/or a known ETA for the fix to go into the PPA?
09:22 ashiq joined #gluster
09:25 jiffin BatS9: patch already merged upstream
09:25 jiffin on master branch
09:26 jiffin BatS9: IMO it will be part of the next stable release, 2.4.4 (probably by the end of next week)
09:28 BatS9 jiffin: Thank you :)
09:28 BatS9 jiffin+
09:43 ankush joined #gluster
09:54 kdhananjay joined #gluster
10:01 BatS9_ joined #gluster
10:06 MrAbaddon joined #gluster
10:14 foster joined #gluster
10:15 Ulrar So, I have a gluster using up a LOT of ram right now. /etc/init.d/glusterfs stop doesn't seem to stop the glusterfsd that is using the ram
10:16 Ulrar Is there a way to force it ? I've read I could kill it, but for some reason the idea of killing the glusterfsd process is stressing me
10:16 ShwethaHP joined #gluster
10:17 skoduri joined #gluster
10:24 Gambit15 joined #gluster
10:26 karthik_us joined #gluster
10:26 rafi1 joined #gluster
10:29 ahino joined #gluster
10:50 rafi1 joined #gluster
10:53 kdhananjay joined #gluster
10:57 msvbhat joined #gluster
10:59 ndevos Ulrar: can you get the whole commandline of the process in question?
10:59 ndevos Ulrar: some processes (like self-heal, gluster/nfs, ...) are started by glusterd and not stopped by the glusterfsd service scripts (only for bricks)
11:05 Ulrar ndevos: Too late, I rebooted the machine
11:05 Ulrar Thanks anyway :)
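
(A hedged aside, not from the log: the check ndevos was asking for can be done with standard procps tools; the grep pattern is only an assumption about typical gluster daemon names.)

    # List running gluster daemons with their resident memory and full
    # command line. Brick processes (glusterfsd) carry the volume name and
    # brick path in their arguments, while self-heal and gluster/nfs daemons
    # are glusterfs clients spawned by glusterd itself.
    ps -eo pid,rss,args | grep '[g]luster'
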
11:19 RameshN joined #gluster
11:20 BatS9 joined #gluster
11:33 Julian joined #gluster
11:36 BatS9_ joined #gluster
11:37 Guest16829 Hi there. Never used IRC to get support... so I hope this is right ;)
11:38 Guest16829 I'm using GlusterFS in a 4-node (SSD) kubernetes cluster and I am seeing terrible read+write performance on all glusterfs volumes (around 10mbs).
11:39 Guest16829 I've already tried to tune some parameters without any significant results.
11:40 d0nn1e joined #gluster
11:43 Guest16829 So my question would be: are there any miraculous settings/parameters I should tune, and what performance should I expect? Kubernetes is running in a blade-center with SSD hosts, so fast network, fast IO. Everything else, like network latency, is fine; the only things that are really slow are the gluster volumes.
11:46 level7 joined #gluster
11:53 ashiq joined #gluster
11:53 Guest16829 left #gluster
11:54 armyriad joined #gluster
11:55 sona joined #gluster
11:57 shaunm joined #gluster
12:00 msvbhat joined #gluster
12:18 RameshN joined #gluster
12:23 sona joined #gluster
12:23 kpease joined #gluster
12:34 skumar_ joined #gluster
12:38 ashiq joined #gluster
12:38 JonathanD joined #gluster
12:42 Philambdo joined #gluster
12:42 jiffin joined #gluster
12:44 ira joined #gluster
12:51 ahino joined #gluster
12:56 unclemarc joined #gluster
12:59 bfoster joined #gluster
13:20 kotreshhr left #gluster
13:20 msvbhat joined #gluster
13:32 jiffin joined #gluster
13:33 ahino joined #gluster
13:35 Saravanakmr joined #gluster
13:47 sbulage joined #gluster
13:48 Klas georeplication is not exactly lightning-quick, is it =P?
13:49 Klas currently, we are transferring about 110 MB/minute
13:51 cloph only having a 100MBit link here for geo-replication, and geo-rep can saturate that.. But as always tons of small files are slower than a few bigger ones..
13:55 bwerthmann joined #gluster
13:55 gem joined #gluster
13:58 squizzi joined #gluster
13:59 Seth_Karlo joined #gluster
14:03 Seth_Karlo joined #gluster
14:06 rwheeler joined #gluster
14:07 kdhananjay joined #gluster
14:08 bfoster joined #gluster
14:11 sonal joined #gluster
14:12 skoduri joined #gluster
14:12 gyadav joined #gluster
14:20 ahino joined #gluster
14:25 Humble joined #gluster
14:34 saybeano joined #gluster
14:36 ashiq joined #gluster
14:40 bwerthma1n joined #gluster
14:45 PTech joined #gluster
14:46 farhorizon joined #gluster
14:46 bwerthma1n joined #gluster
14:46 shyam joined #gluster
14:47 Klas cloph: definitely a case of a ton of small files, trye
14:47 Klas *true
14:50 cloph you might be able to get better results by bumping sync_jobs or switching to tarssh mode if you didn't already try those
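
(A hedged sketch of the two knobs cloph mentions, using the 3.x geo-replication config syntax and the option spellings as he gives them; <mastervol>, <slavehost> and <slavevol> are placeholders for an actual session.)

    # Raise the number of parallel sync jobs.
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config sync_jobs 6
    # Switch from rsync to tar-over-ssh, which tends to cope better with
    # very large numbers of small files.
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config use_tarssh true
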
14:50 bwerthma1n joined #gluster
14:51 StormTide joined #gluster
14:53 bwerthma1n joined #gluster
14:53 StormTide morning. last night had a client lose network connectivity for about 15 mins. this caused a ping timeout of all bricks, and when the network returned gluster didn't reconnect. is there some sort of configuration for client reconnection after a network timeout that I'm missing?
14:54 bwerthma1n joined #gluster
14:55 StormTide it also seemed to give up its mount point rather than just returning an I/O error, which caused files that should have been written to gluster to be written into the underlying mount path
14:56 StormTide this is on the 3.10 release
14:57 StormTide the error message that logged was "All subvolumes are down. Going offline until atleast one of them comes back up." but it never came back even when the network was reachable again...
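
(A hedged aside: the ping timeout the client logged is governed by a volume option, shown below with <volname> as a placeholder. This only illustrates the knob involved; it does not explain why the mount failed to recover.)

    # Inspect the client-side ping timeout (42 seconds by default).
    gluster volume get <volname> network.ping-timeout
    # Raising it means short outages no longer mark every brick as down.
    gluster volume set <volname> network.ping-timeout 60
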
14:57 major anyone familiar with snapshot restore and the expected behavior?
15:01 kshlm Community meeting starting now in #gluster-meeting
15:01 rwheeler joined #gluster
15:02 mhulsman joined #gluster
15:04 bwerthmann joined #gluster
15:07 msvbhat joined #gluster
15:08 rwheeler joined #gluster
15:15 Wizek_ joined #gluster
15:17 vbellur joined #gluster
15:17 plarsen joined #gluster
15:25 wushudoin joined #gluster
15:34 msvbhat joined #gluster
15:35 Shu6h3ndu joined #gluster
15:39 snehring joined #gluster
15:40 misc joined #gluster
15:42 rafi pkalever is doing a demo on block storage
15:42 rafi interested people can join through https://bluejeans.com/102845329/
15:43 shaunm joined #gluster
15:45 kshlm joined #gluster
15:45 xiu joined #gluster
15:48 samikshan joined #gluster
16:00 riyas joined #gluster
16:06 nega joined #gluster
16:06 ahino joined #gluster
16:07 nega is it safe to upgrade gluster from v3.3.1 to v3.10?
16:10 farhorizon joined #gluster
16:18 jwd joined #gluster
16:25 cvstealt1 joined #gluster
16:28 Gambit15 joined #gluster
16:30 nobody481 joined #gluster
16:30 renout_away joined #gluster
16:31 mlg9000 joined #gluster
16:32 level7 joined #gluster
16:41 gem joined #gluster
16:42 ashiq joined #gluster
16:45 Seth_Karlo joined #gluster
17:04 major joined #gluster
17:13 buvanesh_kumar joined #gluster
17:18 oajs joined #gluster
17:19 vbellur joined #gluster
17:24 ivan_rossi left #gluster
17:26 major soo .. regarding snapshot restoration .. lets say I am currently running from snapshot/volume "A", and I restore snapshot "B" .. is there no way to go back to "A" ?
17:33 ahino joined #gluster
17:34 k4n0 joined #gluster
17:52 [diablo] joined #gluster
17:56 farhorizon joined #gluster
18:01 farhorizon joined #gluster
18:06 major heh .. does no one restore snapshots? :)
18:17 gem joined #gluster
18:21 rafi joined #gluster
18:27 unclemarc joined #gluster
18:29 mhulsman joined #gluster
18:35 shyam joined #gluster
18:45 dayne joined #gluster
18:46 farhorizon joined #gluster
18:46 riyas joined #gluster
18:57 farhorizon joined #gluster
18:57 jwd joined #gluster
19:03 rafi joined #gluster
19:03 oajs joined #gluster
19:04 rastar joined #gluster
19:05 dayne i've got a gluster node that after a reboot has been problematic re-joining the cluster. To get it happy we did a /var/lib/glusterd reset (moved it to the side, put the glusterd.info back in place, restarted glusterd). That got pod2 back to being considered a peer in the cluster by pods3-6. However, it has no volume definitions. having a bugger of a time figuring out a safe path forward to troubleshoot what might be the
19:05 dayne issue.
19:09 vbellur joined #gluster
19:27 major think I am gonna avoid touching any of the restoration code until I understand what the expected outcome is.. gonna go back to working on snapshot details and the subvol-prefix
19:29 dayne pod2 (gluster node w/o any volumes): gluster volume sync pod5 all # results in no volumes being populated on pod2.
19:30 dayne gluster volume sync (specific volume) # again doesn't bring the definition of the volume over to pod2.  Am I misinterpreting how gluster volume sync is supposed to work?
19:33 JoeJulian dayne: did you put back /var/lib/glusterd/peers/* also?
19:34 JoeJulian If not, a probing may be in order.
19:34 dayne @JoeJulian: yes, caught that the hard way.. took a bit to get it happy with being a fully functioning peer member.
19:35 dayne right now all 5 nodes feel each other is in state Connected and Peer in Cluster with each other.
19:37 JoeJulian In most cases, once the peering is reestablished it usually syncs the vols automatically. If not, sync has always worked for me. If it doesn't, just rsync /var/lib/glusterd/vols from a good box.
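
(A hedged sketch of the rsync approach JoeJulian describes; "good-node" is a placeholder for any healthy peer, and the paths assume the default /var/lib/glusterd layout.)

    # On the node that is missing the volume definitions:
    service glusterd stop        # or: systemctl stop glusterd
    rsync -av good-node:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
    service glusterd start       # or: systemctl start glusterd
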
19:37 dayne my concern is that pod2 is a slightly newer version of gluster (3.7.10-1) vs pod3-6 (3.7.6-1)
19:38 dayne we were just getting to that; thought an rsync was a reasonable next step
19:38 dayne ohh, that version on pod2 is actually 3.7.20-1 (not 10-1)
19:38 dayne going to give the rsync a shot -- thanks for making the idea not seem too far-fetched as a reasonable next step.
19:41 Seth_Karlo joined #gluster
19:42 dayne phew - that worked - got a pile of deactivated bricks now .. but progress!
19:42 JoeJulian +1
19:44 major okay .. soo .. added a more robust mount test for brickinfo->path when creating a volume, so now btrfs subvolumes that are NOT really mount points can be used
19:44 major of all the things about btrfs .. that particular "feature" is the most annoying...
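
(A hedged illustration of the btrfs behaviour major is referring to, not the actual check he changed; /data/brick is a placeholder for a subvolume path. A btrfs subvolume reports its own device number to stat() yet is not listed as a mounted filesystem, so naive mount-point tests disagree about it.)

    # The subvolume's device number differs from its parent directory's...
    stat -c '%n dev=%d' /data/brick /data/brick/..
    # ...but it does not appear in the mount table:
    grep ' /data/brick ' /proc/self/mounts || echo "not a mount point"
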
19:45 vbellur joined #gluster
19:47 gem joined #gluster
19:48 mhulsman joined #gluster
19:49 mhulsman joined #gluster
19:49 MrAbaddon joined #gluster
19:51 oajs joined #gluster
20:02 edong23 joined #gluster
20:05 oajs joined #gluster
20:08 anoopcs joined #gluster
20:11 jkroon joined #gluster
20:24 oajs joined #gluster
20:29 vbellur1 joined #gluster
20:33 MadPsy left #gluster
20:34 farhorizon joined #gluster
20:42 anoopcs joined #gluster
20:53 farhorizon joined #gluster
20:53 derjohn_mob joined #gluster
20:53 shyam joined #gluster
21:03 major And some meetings make you wonder if maybe you should be looking for a new employer...
21:12 farhorizon joined #gluster
21:18 farhoriz_ joined #gluster
21:36 pioto joined #gluster
21:46 major joined #gluster
21:46 yalu_ joined #gluster
21:48 shyam joined #gluster
22:21 major damn.. I have to rewire one of my CAT5 runs .. getting crosstalk
22:21 major grrr
22:23 MrAbaddon joined #gluster
22:41 yalu joined #gluster
22:46 vbellur joined #gluster
22:52 fcoelho joined #gluster
22:56 fcoelho joined #gluster
23:03 msvbhat joined #gluster
23:33 Gambit15 joined #gluster
