IRC log for #gluster, 2017-10-25

All times shown according to UTC.

Time Nick Message
00:43 jeffspeff joined #gluster
00:58 omie88877777 joined #gluster
01:20 csaba joined #gluster
01:23 baber joined #gluster
01:31 susant joined #gluster
01:56 ilbot3 joined #gluster
01:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:43 hmamtora joined #gluster
02:44 blu_ joined #gluster
03:20 aravindavk joined #gluster
03:31 wushudoin joined #gluster
03:38 ppai joined #gluster
03:58 psony joined #gluster
04:01 itisravi joined #gluster
04:06 purpleidea joined #gluster
04:06 kramdoss_ joined #gluster
04:27 nbalacha joined #gluster
04:30 jkroon joined #gluster
04:50 sunnyk joined #gluster
05:04 sanoj joined #gluster
05:09 JPaul joined #gluster
05:11 gyadav joined #gluster
05:13 shdeng joined #gluster
05:14 ndarshan joined #gluster
05:23 mbukatov joined #gluster
05:26 skumar joined #gluster
05:28 karthik_us joined #gluster
05:31 hgowtham joined #gluster
05:37 msvbhat joined #gluster
05:38 xavih joined #gluster
05:42 mattmcc joined #gluster
05:44 jiffin joined #gluster
05:58 Saravanakmr joined #gluster
06:08 Manikandan_ joined #gluster
06:08 Manikandan joined #gluster
06:28 Humble joined #gluster
06:29 apandey joined #gluster
06:40 kdhananjay joined #gluster
06:40 ppai joined #gluster
06:48 skoduri joined #gluster
06:53 nicktick joined #gluster
06:55 bEsTiAn joined #gluster
06:56 nicktick What's the max number of files GlusterFS can handle?
07:02 Humble joined #gluster
07:04 susant joined #gluster
07:08 mattmcc joined #gluster
07:12 msvbhat joined #gluster
07:13 fsimonce joined #gluster
07:15 ThHirsch joined #gluster
07:16 ivan_rossi joined #gluster
07:17 ppai joined #gluster
07:19 nicktick Is there any way to use GlusterFS as a key-value repository?
07:26 ivan_rossi nicktick: same as with any other file system. the point is: why do you want a file system to do the job of a database?
07:27 nicktick a traditional DB can't handle billions of objects.
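
A minimal sketch of the "file system as key-value store" idea discussed above, assuming a GlusterFS FUSE mount at /mnt/glustervol (the path and function names are illustrative, not from the log); hashing keys into a fanned-out directory tree keeps any single directory from accumulating millions of entries:

    #!/bin/bash
    # toy key-value store on top of a GlusterFS mount (paths illustrative)
    MOUNT=/mnt/glustervol/kv

    kv_path() {                 # map a key to a file path
        local h
        h=$(printf '%s' "$1" | sha1sum | awk '{print $1}')
        echo "$MOUNT/${h:0:2}/${h:2:2}/$h"   # two fan-out levels
    }

    kv_set() {                  # kv_set KEY VALUE
        local p; p=$(kv_path "$1")
        mkdir -p "$(dirname "$p")"
        printf '%s' "$2" > "$p"
    }

    kv_get() {                  # kv_get KEY
        cat "$(kv_path "$1")"
    }

Usage: kv_set user:1234 '{"name":"x"}' then kv_get user:1234; every operation is an ordinary file read/write, which is exactly ivan_rossi's point.
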
07:40 msvbhat_ joined #gluster
07:40 apandey_ joined #gluster
07:42 apandey__ joined #gluster
07:43 nicktick joined #gluster
07:50 apandey_ joined #gluster
07:58 kdhananjay joined #gluster
08:00 ron-slc joined #gluster
08:01 nbalacha joined #gluster
08:15 rafi joined #gluster
08:16 apandey__ joined #gluster
08:19 p joined #gluster
08:22 Guest37667 Hi, I have a gluster cluster with three bricks, each on a separate node. I am trying to create a snapshot of a gluster volume but it fails
08:22 Guest37667 error is - snapshot create: failed: Commit failed on mynode2. Please check log file for details. Commit failed on mynode3. Please check log file for details. Snapshot command failed
08:22 Guest37667 can anyone help?
08:24 Guest37667 anyone there?
08:30 kdhananjay joined #gluster
08:32 buvanesh_kumar joined #gluster
08:43 dkossako joined #gluster
08:44 dkossako Hello all, I'm trying to heal data onto my node (it was just replaced and is empty), so I run 'gluster volume heal name full', which outputs: 'Launching heal operation to perform full self heal on volume name has been unsuccessful'
08:45 dkossako In log: '0-glusterfs: Couldn't get xlator xl-0'
08:45 dkossako Any idea what's going wrong?
08:45 dkossako glusterfs 3.7.6, from Ubuntu repo
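
For readers hitting the same heal failure, the usual sequence when re-populating a replaced, empty replica node looks roughly like this (a sketch; "name" is the volume name from dkossako's command, and behavior may differ on a release as old as 3.7.6):

    # confirm the self-heal daemon is listed as online on every node
    gluster volume status name
    # list entries still pending heal
    gluster volume heal name info
    # then trigger the full self-heal that failed above
    gluster volume heal name full
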
08:45 sunnyk Hi Guest37667: to use the snapshot feature, every brick must be created on an independent thinly provisioned logical volume.
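
To make sunnyk's requirement concrete, a minimal sketch of creating a brick on a thinly provisioned LV, assuming an existing volume group gluster_vg and mount point /bricks/brick1 (both illustrative); gluster volume snapshots are built on LVM thin-pool snapshots, which is why plain (thick) LVs won't do:

    # create a thin pool inside an existing volume group
    lvcreate --size 100G --thinpool gluster_pool gluster_vg
    # carve a thin LV for the brick out of that pool
    lvcreate --virtualsize 100G --thin --name brick1_lv gluster_vg/gluster_pool
    mkfs.xfs /dev/gluster_vg/brick1_lv
    mkdir -p /bricks/brick1
    mount /dev/gluster_vg/brick1_lv /bricks/brick1
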
08:46 Guest37667 @sunnyk - yes, it is thinly provisioned
08:47 Guest37667 sunnyk - is there anything else we can check?
08:47 sunnyk @Guest37667, seeing the log will help
08:48 Guest37667 @sunnyk - sure, I checked glusterd.log and cli.log but they don't give any helpful information
08:49 Guest37667 0-management: Post Validation failed for operation Snapshot on local node
08:49 Guest37667 this is what glusterd.log says
08:49 Guest37667 0-management: Commit failed on peers
08:51 susant joined #gluster
08:54 MrAbaddon joined #gluster
08:58 sunnyk Guest37667, can you share the log with me at sunkumar@redhat.com
08:58 Guest37667 @sunnyk - thanks, I am sending it
08:59 sunnyk Guest37667, welcome
09:02 kotreshhr joined #gluster
09:07 Guest37667 @sunnyk - you will get a mail from priyanka4openshift@gmail.com
09:08 DoubleJ joined #gluster
09:20 itisravi joined #gluster
09:20 Guest37667 @sunnyk - i have sent a mail with subject "gluster logs", please check
09:22 _KaszpiR_ joined #gluster
09:25 jiffin1 joined #gluster
09:47 kotreshhr left #gluster
09:49 msvbhat joined #gluster
10:08 sunnyk Guest37667, please share the complete log
10:08 Guest37667 @sunnyk - ok, sharing
10:17 jiffin joined #gluster
10:18 Guest37667 @sunnyk - sent, please check
10:21 shyam joined #gluster
10:26 msvbhat joined #gluster
10:27 sunnyk @Guest37667, can you also share cmd_history.log
10:36 ivan_rossi left #gluster
10:36 poornima_ joined #gluster
10:43 nicktick joined #gluster
10:47 Shu6h3ndu joined #gluster
10:51 Guest37667 Hi sunnyk - did you get the chance to look at the logs?
10:52 skoduri joined #gluster
10:55 sunnyk Guest37667, yes and I need cmd_history.log too
10:56 Guest37667 oh ok, sending that too
11:01 Guest37667 @sunnyk -sent
11:01 prasanth joined #gluster
11:17 Wizek_ joined #gluster
11:19 skoduri joined #gluster
11:32 Guest37667 sunnyk - I have replied to your email
11:38 bEsTiAn joined #gluster
11:41 skoduri jiffin++ kkeithley++
11:41 glusterbot skoduri: jiffin's karma is now 7
11:41 glusterbot skoduri: kkeithley's karma is now 34
11:48 ahino joined #gluster
11:49 rouven joined #gluster
11:53 Guest37667 @sunnyk - did you check? sorry to chase
11:58 baber joined #gluster
12:04 phlogistonjohn joined #gluster
12:06 masuberu joined #gluster
12:09 sunnyk @ Guest37667 checked and replied
12:11 Guest37667 thanks sunnyk - but barrier is already disabled - it's "features.barrier: disable"
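
For anyone following along, the checks being traded here look roughly like this (volume and snapshot names are illustrative; `gluster volume get` needs a reasonably recent release):

    # show the current barrier setting Guest37667 quotes
    gluster volume get myvol features.barrier
    # flip it explicitly if needed
    gluster volume set myvol features.barrier disable
    # retry the snapshot
    gluster snapshot create snap1 myvol
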
12:22 Tartifle joined #gluster
12:25 poornimag joined #gluster
12:27 Tartifle hi; I observe a strange behavior on a test cluster (30 nodes, 2 bricks/node, gluster v3.12.1, distributed replicated volume (replica 2)): when I reboot one of the servers, the brick daemon and/or self-heal daemon are sometimes not launched properly
12:29 Tartifle more precisely: glusterd starts, it starts glusterfs and 2 glusterfsd (one per brick) and then, randomly, everything is fine, or one of the 3 processes dies, or 2 of them, or all 3.
12:30 Tartifle in their respective logs, whenever they fail, I can see that they had a problem connecting to the main daemon ("0-glusterfs: readv on 127.0.0.1:24007 failed (Connection reset by peer)")
12:31 Tartifle does this ring a bell for anyone, or should I open a bug ticket?
12:34 Tartifle (if I then run "gluster volume start X force", the remaining processes start properly)
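
The recovery Tartifle describes, spelled out as a sketch (volume name illustrative):

    # see which brick processes and the self-heal daemon actually came up
    gluster volume status myvol
    # respawn only the ones that died; running processes are left alone
    gluster volume start myvol force
    # the connection error quoted above lands in the self-heal daemon's log
    grep 'readv on 127.0.0.1:24007' /var/log/glusterfs/glustershd.log
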
12:44 plarsen joined #gluster
13:04 DV joined #gluster
13:07 phlogistonjohn joined #gluster
13:09 omie888777 joined #gluster
13:12 _KaszpiR_ joined #gluster
13:21 plarsen joined #gluster
13:30 shyam joined #gluster
13:40 hmamtora joined #gluster
13:48 FuzzyVeg joined #gluster
13:50 legreffier joined #gluster
13:52 overclk joined #gluster
13:58 farhorizon joined #gluster
14:01 phlogistonjohn joined #gluster
14:16 skylar1 joined #gluster
14:23 msvbhat joined #gluster
14:24 aravindavk joined #gluster
14:34 msvbhat joined #gluster
14:47 sunnyk joined #gluster
14:53 farhorizon joined #gluster
14:56 kpease joined #gluster
14:57 kpease_ joined #gluster
14:57 poornimag joined #gluster
14:59 _KaszpiR_ joined #gluster
15:02 [diablo] joined #gluster
15:11 DoubleJ joined #gluster
15:11 wushudoin joined #gluster
15:12 kpease joined #gluster
15:15 gyadav joined #gluster
15:17 tom[] i've got some new hw with debian 9.2 installed and i got my gluster replicating setup working just fine except for one issue: boot. the mount options i had on ubuntu 14.04 with an older gluster version don't succeed in mounting the volume during boot
15:17 tom[] using debian package glusterfs-server 3.8.8-1
15:18 kramdoss_ joined #gluster
15:18 tom[] in fstab each host has
15:18 tom[] pvn1:/gv0 /home/data glusterfs defaults,_netdev 0 0
15:19 tom[] in which pvn1 resolves to one of the host's own addresses
15:20 tom[] what should i try to get this to mount during boot?
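
One workaround often suggested for exactly this on systemd-based distros like Debian 9 (a sketch, not verified against tom[]'s setup): let systemd turn the entry into an automount, so the volume is mounted lazily on first access instead of racing glusterd at boot:

    # /etc/fstab -- noauto skips the boot-time mount; x-systemd.automount
    # mounts the volume on first access, after glusterd is up
    pvn1:/gv0  /home/data  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0  0
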
15:35 poornimag joined #gluster
15:43 Saravanakmr joined #gluster
16:00 DV joined #gluster
16:03 Tartifle tom[]: when the server is up, if you manually run "mount /home/data" (it will use the fstab entry), does it work?
16:03 tom[] i think so. but i'll double check. brb
16:05 BlackoutWNCT joined #gluster
16:06 farhorizon joined #gluster
16:09 tom[] Tartifle: yes and no. there are three hosts all participating. much like the getting started tutorial
16:09 tom[] if i boot only one of them, mount /home/data failed
16:10 tom[] then i booted the other two. after that, with gluster volume status showing all three up, mount /home/data worked as expected
16:12 omie888777 joined #gluster
16:12 tom[] ... except on the third host to boot, /home/data mounted automatically
16:13 BlackoutWNCT1 joined #gluster
16:18 Tartifle if the third mounted automatically, it should mean that it's not a problem of daemon launch order, and that glusterd is indeed launched before the mount is tried
16:19 Tartifle so it should also mean that, when you have only one node alive, it's gluster itself that doesn't let you mount the volume
16:19 tom[] which is indeed the case
16:20 Tartifle you should take a look in /var/log/glusterfs/mnt-YOURVOLUME.log, after the manual mount fails, to see if there is a reason
16:20 Tartifle "gluster volume status" will, maybe, give some hints ?
16:21 Tartifle (i'm not a gluster expert at all, btw)
16:22 Dale_ joined #gluster
16:25 tom[] the log shows that the client wants at least one peer before it will mount the volume https://gist.github.com/tom--/4ccb84cbeef0dd4eebf39cd33626ec1e
16:25 glusterbot tom[]: https://gist.github.com/tom's karma is now -9
16:25 glusterbot Title: gist:4ccb84cbeef0dd4eebf39cd33626ec1e · GitHub (at gist.github.com)
16:25 tom[] tom-- is the closest github username i could get to tom[]
16:25 glusterbot tom[]: tom's karma is now -1
16:26 tom[] @glusterbot: lol. you're so cute
16:26 Tartifle tom[]: maybe it means "one OTHER peer"
16:27 Tartifle idk
16:27 snave joined #gluster
16:27 tom[] i think so, because the two it looks for before saying that are both OTHER peers
16:28 tom[] the log belongs to the host with 10.2.0.11, the others being .12 and .13
16:30 tom[] so maybe this isn't a real problem. it's about starting up a cluster from nothing. and that doesn't need to be automatic
16:30 Tartifle think so
16:30 Tartifle and when in production, you're supposed to reboot them one by one for maintenance
16:30 Tartifle to keep things up
16:31 tom[] yup
16:31 tom[] i don't expect a galera cluster to start a new cluster automatically either. if it comes to that, i manage the startup of the first node and the rest come up automatically
16:31 tom[] that's sufficient
17:02 skylar1 joined #gluster
17:11 DV joined #gluster
17:14 phlogistonjohn joined #gluster
17:51 DV joined #gluster
17:51 skylar1 joined #gluster
17:52 BlackoutWNCT joined #gluster
17:54 msvbhat joined #gluster
17:58 _KaszpiR_ joined #gluster
17:58 BlackoutWNCT1 joined #gluster
18:44 ahino joined #gluster
18:58 MrAbaddon joined #gluster
19:00 _KaszpiR_ joined #gluster
19:01 msvbhat joined #gluster
19:15 rouven joined #gluster
19:31 Humble joined #gluster
19:34 rouven joined #gluster
19:36 rouven joined #gluster
19:40 kpease joined #gluster
19:42 jkroon joined #gluster
19:42 atrius_ joined #gluster
19:51 DV joined #gluster
19:53 gospod2 joined #gluster
20:12 plarsen joined #gluster
20:14 Humble joined #gluster
20:24 mrcirca_ocean Hello, has anyone run benchmarks on GlusterFS?
20:51 msvbhat joined #gluster
20:52 mlhess joined #gluster
21:00 plarsen joined #gluster
21:21 gospod2 joined #gluster
21:22 wushudoin joined #gluster
22:28 vbellur1 joined #gluster
22:28 vbellur joined #gluster
22:29 vbellur joined #gluster
22:30 vbellur joined #gluster
22:32 hmamtora_ joined #gluster
22:34 vbellur joined #gluster
22:36 vbellur joined #gluster
22:41 vbellur joined #gluster
22:42 vbellur joined #gluster
22:43 vbellur joined #gluster
22:43 vbellur joined #gluster
22:47 msvbhat joined #gluster
22:59 gbox mrcirca_ocean: https://github.com/gluster/gbench
22:59 glusterbot Title: GitHub - gluster/gbench: Performance Benchmarking scripts for Gluster (at github.com)
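
Besides gbench, a quick-and-dirty first measurement can be as simple as this (mount point illustrative; dd measures only single-stream sequential throughput, so treat it as a smoke test rather than a benchmark):

    # sequential write throughput through the FUSE mount;
    # conv=fsync makes dd flush before reporting a rate
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 conv=fsync
    rm /mnt/glustervol/ddtest
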
22:59 Wizek_ joined #gluster
23:05 masuberu joined #gluster
