IRC log for #gluster, 2016-08-03

All times shown according to UTC.

Time Nick Message
00:06 Telsin joined #gluster
00:14 gvandeweyer joined #gluster
00:19 ahino joined #gluster
00:38 shdeng joined #gluster
01:09 plarsen joined #gluster
01:32 Lee1092 joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 hagarth joined #gluster
02:04 dnunez joined #gluster
02:26 Pupeno joined #gluster
03:21 magrawal joined #gluster
03:28 valkyr1e joined #gluster
03:41 valkyr1e joined #gluster
03:48 nishanth joined #gluster
04:00 atalur joined #gluster
04:01 atinm joined #gluster
04:12 nbalacha joined #gluster
04:13 overclk joined #gluster
04:24 kdhananjay joined #gluster
04:30 sanoj joined #gluster
04:33 valkyr1e joined #gluster
04:33 poornimag joined #gluster
04:35 karthik_ joined #gluster
04:36 itisravi joined #gluster
04:36 shubhendu joined #gluster
04:41 rafi joined #gluster
04:42 level7 joined #gluster
05:00 ira joined #gluster
05:03 aspandey joined #gluster
05:07 nehar joined #gluster
05:09 prasanth joined #gluster
05:13 rafi1 joined #gluster
05:17 ppai joined #gluster
05:17 raghug joined #gluster
05:19 Manikandan joined #gluster
05:19 kotreshhr joined #gluster
05:23 jiffin joined #gluster
05:24 Javezim joined #gluster
05:29 ashiq joined #gluster
05:32 ramky joined #gluster
05:32 om joined #gluster
05:38 aspandey_ joined #gluster
05:42 Bhaskarakiran joined #gluster
05:43 karthik_ joined #gluster
05:43 satya4ever joined #gluster
05:43 Apeksha joined #gluster
05:45 itisravi joined #gluster
05:47 ppai joined #gluster
05:47 hgowtham joined #gluster
05:47 skoduri joined #gluster
05:48 sanoj joined #gluster
05:49 kshlm joined #gluster
05:50 Manikandan joined #gluster
05:54 kotreshhr joined #gluster
05:55 hagarth joined #gluster
05:55 ndarshan joined #gluster
05:56 RameshN joined #gluster
05:57 julim joined #gluster
05:59 [diablo] joined #gluster
06:01 karnan joined #gluster
06:02 kovshenin joined #gluster
06:03 masuberu joined #gluster
06:04 karnan_ joined #gluster
06:09 jtux joined #gluster
06:09 pur joined #gluster
06:10 hchiramm joined #gluster
06:13 Siavash_ joined #gluster
06:14 aravindavk joined #gluster
06:16 aj__ joined #gluster
06:25 karnan joined #gluster
06:30 ashiq joined #gluster
06:45 arcolife joined #gluster
06:49 Wizek joined #gluster
06:53 Pupeno joined #gluster
06:56 ira joined #gluster
07:02 Philambdo joined #gluster
07:04 Manikandan joined #gluster
07:20 nishanth joined #gluster
07:21 devyani7_ joined #gluster
07:26 kxseven joined #gluster
07:29 raghug joined #gluster
07:29 aspandey joined #gluster
07:29 Saravanakmr joined #gluster
07:29 fsimonce joined #gluster
07:32 rafi1 joined #gluster
07:34 masuberu joined #gluster
07:42 masuberu joined #gluster
07:43 Wizek joined #gluster
07:48 ppai joined #gluster
07:50 kotreshhr joined #gluster
07:50 Pupeno joined #gluster
07:51 deniszh joined #gluster
07:57 aj__ joined #gluster
07:57 arcolife joined #gluster
07:57 archit_ joined #gluster
08:07 Wizek joined #gluster
08:07 somlin22 joined #gluster
08:10 Siavash_ joined #gluster
08:17 kshlm joined #gluster
08:22 arcolife joined #gluster
08:22 Slashman joined #gluster
08:25 ahino joined #gluster
08:29 _nixpanic joined #gluster
08:29 _nixpanic joined #gluster
08:31 itisravi joined #gluster
08:37 magrawal joined #gluster
08:38 Alghost joined #gluster
08:40 kdhananjay joined #gluster
08:40 karthik_ joined #gluster
08:40 sanoj joined #gluster
09:02 arcolife joined #gluster
09:05 Gnomethrower joined #gluster
09:09 msvbhat joined #gluster
09:10 aspandey joined #gluster
09:15 level7 joined #gluster
09:17 sanoj joined #gluster
09:22 kshlm joined #gluster
09:24 Saravanakmr joined #gluster
09:25 somlin22 joined #gluster
09:31 ashiq joined #gluster
09:34 Ryan_ joined #gluster
09:39 Manikandan joined #gluster
09:43 Pupeno joined #gluster
09:43 Pupeno joined #gluster
09:45 Saravanakmr joined #gluster
09:46 bkolden joined #gluster
09:53 legreffier hi folks :D
09:54 legreffier What's the deal with /var/run/gluster? Why is it mounted? It seems related to geo-replication, but I can't seem to find any info...
10:09 kdhananjay joined #gluster
10:14 PotatoGim joined #gluster
10:17 s-hell joined #gluster
10:17 Chinorro joined #gluster
10:18 aj__ joined #gluster
10:20 Arrfab joined #gluster
10:22 Pupeno joined #gluster
10:22 Pupeno joined #gluster
10:27 samikshan joined #gluster
10:36 bfoster joined #gluster
10:38 tdasilva joined #gluster
10:40 social joined #gluster
10:41 social joined #gluster
10:42 karthik_ joined #gluster
10:48 rafi joined #gluster
10:50 ira_ joined #gluster
10:52 itisravi joined #gluster
10:54 aravindavk legreffier: /var/run/gluster will have socket files and other files required during run time
10:54 aravindavk legreffier: any issues?
10:56 raghug joined #gluster
10:56 ndevos legreffier: /var/run is a tmpfs mountpoint on current linux distributions, /var/run/gluster should just be a directory there
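A quick way to check this on a node, as a minimal sketch using standard tools (the paths are the usual defaults, not taken from this conversation):
    df -h /var/run            # shows the backing filesystem, normally tmpfs via /run
    ls -l /var/run/gluster/   # runtime files: sockets, pid files, statedumps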
11:07 B21956 joined #gluster
11:14 aj__ joined #gluster
11:16 level7 joined #gluster
11:18 jtux joined #gluster
11:20 level7 joined #gluster
11:25 level7 joined #gluster
11:27 level7 joined #gluster
11:46 somlin22 joined #gluster
11:56 jiffin joined #gluster
11:57 kkeithley Gluster Community Meeting starts in two minutes in #gluster-meeting
11:58 prasanth joined #gluster
12:02 wadeholl_ joined #gluster
12:03 ira joined #gluster
12:06 aj__ joined #gluster
12:07 karthik_ joined #gluster
12:10 pdrakeweb joined #gluster
12:23 kukulogy joined #gluster
12:23 kukulogy hello, how do I add new bricks to an existing striped-replicated volume?
12:23 kukulogy I put
12:23 kukulogy gluster volume add-brick striped stripe 3 gluster05:/glu5 gluster06:/glu6 force
12:27 kukulogy I tried to delete a file on my mount, but it displays: rm: copy-test-023: Stale NFS file handle
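For comparison, a minimal sketch of growing a plain distributed-replicated volume (no stripe) and spreading existing data onto the new bricks; the volume name, hostnames and brick paths are placeholders, not the actual layout above:
    gluster volume add-brick myvol replica 2 gluster05:/bricks/b5 gluster06:/bricks/b6
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status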
12:35 armin joined #gluster
12:35 hackman joined #gluster
12:37 armin joined #gluster
12:38 Philambdo1 joined #gluster
12:46 kukulogy joined #gluster
12:49 somlin22 joined #gluster
12:56 gluco joined #gluster
13:01 ahino joined #gluster
13:02 Norky joined #gluster
13:03 kdhananjay joined #gluster
13:09 pdrakeweb joined #gluster
13:11 theron_ joined #gluster
13:13 level7_ joined #gluster
13:14 Gnomethrower joined #gluster
13:21 Lee1092 joined #gluster
13:26 Gnomethrower joined #gluster
13:28 squizzi joined #gluster
13:28 somlin22 joined #gluster
13:31 Pupeno joined #gluster
13:39 Pupeno joined #gluster
13:39 pdrakeweb joined #gluster
13:41 sg_ joined #gluster
13:42 sg_ it seems that glusterfsd reverts ownership on brick directories
13:42 sg_ is that true?
13:43 sg_ I turned on auditd monitoring and it shows that glusterfsd makes the changes
13:44 sg_ is that a known issue?
13:46 ndevos sg_: depends, there is a storage.owner-uid and .owner-gid option, I think
13:46 plarsen joined #gluster
13:46 ndevos sg_: if you change the ownership through a gluster client, the values should stick...
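A minimal sketch of the options ndevos refers to; the volume name and numeric IDs are placeholders:
    gluster volume set myvol storage.owner-uid 1000   # uid the brick root should keep
    gluster volume set myvol storage.owner-gid 1000   # gid the brick root should keep
    gluster volume get myvol storage.owner-uid        # verify the current value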
13:48 JoeJulian side_control: I have no infiniband stuff
13:50 ndevos kukulogy: you really should not use stripe anymore, it is much better to use sharding instead, if you really insist on splitting files into pieces
13:50 sg_ ok, yeah, i'm making the changes through the mount point
13:50 sg_ ok, yeah, i'm making the changes through the _gluster_ mount point
13:51 JoeJulian imho, you should have never used ,,(stripe)
13:51 glusterbot Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
13:52 kukulogy ndevos: haven't really read up on sharding. Thanks! I've been told about it before but I didn't listen lol.
13:54 ndevos kukulogy: stripe is on its way out, I guess it will get disabled in upcoming releases
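A minimal sketch of enabling sharding instead of stripe; the volume name and block size are placeholders, and sharding normally only applies to files written after it is switched on:
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB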
13:57 JoeJulian Someone should make a strip translator. Seems like 75% of the general public talks about having their data "stripped". Probably the same people that "loose" data.
14:00 kkeithley cp /dev/null $myfile. There, all the bits have been stripped
14:01 kkeithley no translator needed
14:01 post-factum distributed /dev/null ftw!
14:01 JoeJulian :)
14:01 post-factum now more data could be stripped ™
14:03 BitByteNybble110 Sounds like a perfect compression and deduplication method to me :D
14:04 hagarth joined #gluster
14:06 JoeJulian That's the complete opposite of my quantum uncertainty drive. You never need to write your file. All possible data is already in the drive, you just need to filter it for the results you want.
14:07 post-factum JoeJulian: that is the idea of pifs
14:07 somlin22 joined #gluster
14:08 post-factum JoeJulian: https://github.com/philipl/pifs
14:08 glusterbot Title: GitHub - philipl/pifs: πfs - the data-free filesystem! (at github.com)
14:08 post-factum i believe we need pifs xlator
14:08 post-factum the more nodes you have the faster store is performed
14:12 shyam joined #gluster
14:14 dnunez joined #gluster
14:19 guhcampos joined #gluster
14:19 somlin22 joined #gluster
14:20 shortdudey123 joined #gluster
14:22 skylar joined #gluster
14:23 msvbhat joined #gluster
14:24 ahino joined #gluster
14:28 pdrakeweb joined #gluster
14:31 nbalacha joined #gluster
14:33 mrten i've got a crash in glusterd when setting up geo-replication
14:33 mrten anything more that I can do besides the report in https://bugzilla.redhat.com/show_bug.cgi?id=1363613 ?
14:33 glusterbot Bug 1363613: medium, unspecified, ---, bugs, NEW , Crash of glusterd when force-creating geo-replication
14:37 legreffier ndevos: what's the point of mounting it? (it's already mounted where I want it in my case)
14:37 Saravanakmr joined #gluster
14:37 guhcampos joined #gluster
14:37 legreffier + socket files are in /tmp/gsyncd-ssh-[rubbish] folders.
14:39 ndevos legreffier: /var/run/gluster should not be a mount point, so I'm not sure what happend on your environment
14:40 ndevos legreffier: there are some gluster internal processes that need to mount volumes in order to function, but they should be on a different directory
14:40 guhcampos joined #gluster
14:42 legreffier http://pastie.org/10928237
14:43 legreffier on each machine, bo01.prod.poole.s1 and b2.p.fti.net are respectively the machine's own hostname, i.e. the same as localhost.
14:45 legreffier I don't get why it should get mounted; the volumes are already mounted on /var/opt/hosting/shared-volumes
15:01 bkolden joined #gluster
15:03 legreffier mrten: are you at the same version on both ends of the geo-rep? You reported 3.8.1, but the log says: package-string: glusterfs 3.7.14
15:04 legreffier mrten: have you tried to start a georep to an empty node::vol ?
15:04 mrten legreffier: I upgraded from 3.7.14 to 3.8.1 halfway through the bug report
15:07 legreffier mrten: yeah got it :)
15:08 kpease joined #gluster
15:09 legreffier mrten: do you have any useful output from "gluster volume geo-replication status"?
15:10 legreffier see if you have some ghost-ish remains from your previous attempts...
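The full form of that status command, as a sketch; the master volume, slave host and slave volume names are placeholders:
    gluster volume geo-replication mastervol slavehost::slavevol status detail
    gluster volume geo-replication status              # all sessions known to this node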
15:10 mrten legreffier: no, I haven't tried starting georep on an empty slave
15:10 legreffier mrten: see if you get the same behavior.
15:11 legreffier mrten: I've done quite some work with georep, but never experienced an actual crash.
15:11 mrten legreffier: the output of georep status differs depending on which brick I run the command on
15:12 mrten legreffier: http://pastie.org/10928274
15:13 kukulogy joined #gluster
15:13 mrten legreffier: i'm looking at https://github.com/gluster/glusterfs/blob/release-3.8/xlators/mgmt/glusterd/src/glusterd-geo-rep.c line 2917 now but my C is too rusty :))
15:13 glusterbot Title: glusterfs/glusterd-geo-rep.c at release-3.8 · gluster/glusterfs · GitHub (at github.com)
15:16 julim joined #gluster
15:20 mrten legreffier: probably got it: my username is 14 characters ('georeplication') and the buffer is sized _POSIX_LOGIN_NAME_MAX, which is... 9.
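A quick way to see the size mismatch mrten describes, as a sketch using standard tools; _POSIX_LOGIN_NAME_MAX (9) is only the minimum POSIX guarantees, while the real per-system limit is usually far larger:
    getconf LOGIN_NAME_MAX            # the actual limit on this system, typically 256 on Linux
    echo -n georeplication | wc -c    # 14, i.e. longer than a buffer sized to the POSIX minimum of 9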
15:20 side_control I'm finding a lot of guides on resolving split-brain, but I can't seem to find anything that works
15:21 side_control any good links?
15:21 mrten legreffier: 1990's called, they want their limits back
15:21 side_control it's just one file: https://paste.fedoraproject.org/400754/47023768/
15:21 glusterbot Title: #400754 Fedora Project Pastebin (at paste.fedoraproject.org)
15:23 bowhunter joined #gluster
15:24 mrten side_control: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html-single/Administration_Guide/index.html#sect-Managing_Split-brain ?
15:24 glusterbot Title: Administration Guide (at access.redhat.com)
15:25 pdrakeweb joined #gluster
15:25 B21956 joined #gluster
15:26 JoeJulian http://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/
15:26 glusterbot Title: Split Brain (Auto) - Gluster Docs (at gluster.readthedocs.io)
15:26 kukulogy joined #gluster
15:26 side_control gluster volume heal VOL split-brain METHOD ?
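A sketch of what those CLI forms look like; the volume, brick and file names are placeholders, not the actual paths from the paste above:
    gluster volume heal myvol info split-brain                                    # list entries still in split-brain
    gluster volume heal myvol split-brain latest-mtime /path/inside/volume/file   # keep the newest copy
    gluster volume heal myvol split-brain bigger-file /path/inside/volume/file    # keep the larger copy
    gluster volume heal myvol split-brain source-brick server1:/bricks/b1 /path/inside/volume/file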
15:27 somlin22 joined #gluster
15:27 hagarth joined #gluster
15:35 somlin22 joined #gluster
15:36 rettori joined #gluster
15:41 side_control well, it says it's undergoing a heal, we'll see
15:42 legreffier mrten: I'm not sure you can have several masters pointing to the same slave
15:43 legreffier mrten: gluster-3 appears as created; did you try to start it?
15:43 legreffier mrten: if you had writes to gluster-4, you WILL have to fully wipe it, I think
15:44 legreffier (or expect trouble in the future)
15:51 elico joined #gluster
15:51 kukulogy joined #gluster
15:55 mrten legreffier: your last line makes me go :(
15:56 level7 joined #gluster
15:58 mrten legreffier: I'd like to initialize a georep slave with data to prevent re-copying 400G of data between two site locations (the data is already in a replicated db on the two locations and I'd like to migrate the data from the db to gluster)
15:58 hackman joined #gluster
16:02 elico I have read the article at https://joejulian.name/blog/dht-misses-are-expensive/ and I wanted to "simulate" it with a similar but simplified piece of code that demonstrates the peer selection algorithm.
16:02 elico My issue is that I am not familiar with C. Can anyone help me to simplify the idea?
16:02 glusterbot Title: DHT misses are expensive (at joejulian.name)
16:05 guhcampos joined #gluster
16:17 legreffier mrten: I'd advise against doing that, as gluster will run a huge find over all of your data anyway, which will take plenty of time
16:17 legreffier you should start blank
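For reference, a sketch of creating a fresh geo-rep session against an empty slave volume; all names are placeholders and passwordless SSH to the slave is assumed to be set up already:
    gluster system:: execute gsec_create                                           # generate the geo-rep ssh keys on the master cluster
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status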
16:19 wushudoin joined #gluster
16:19 wushudoin joined #gluster
16:26 harish_ joined #gluster
16:33 somlin22 joined #gluster
16:34 aj__ joined #gluster
16:37 mrten legreffier: I was kinda hoping that if I put the data in the same place on the georep slave as on the master, the g/rsync daemon would recognise that there's no difference
16:37 mrten legreffier: thanks for the help until now, gotta eat
16:40 bwerthmann joined #gluster
16:43 hagarth joined #gluster
16:51 hagarth1 joined #gluster
17:06 Abrecus joined #gluster
17:15 armyriad joined #gluster
17:17 armyriad joined #gluster
17:43 ahino joined #gluster
17:45 cloph what's the distinction between server.event-threads and client.event-threads ?
17:46 cloph and is https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Small_File_Performance_Enhancements.html a mistake? "1. As each thread processes a connection at a time, having more threads than connections to either the brick processes (glusterfsd) or the client processes (glusterfs or gfapi) is not recommended. […]"
17:46 glusterbot Title: 9.5. Small File Performance Enhancements (at access.redhat.com)
17:47 nehar joined #gluster
17:55 skoduri joined #gluster
17:56 julim joined #gluster
17:58 shyam cloph: server.event-threads controls the number of threads reading requests from the connected end points on the brick processes; client.event-threads controls the same on the client side (reading responses from the bricks)
17:59 cloph that's still a bit unclear to me - what would the end-points be in this case?
17:59 shyam cloph: So if you set the thread count larger than the number of active connections it does not help, which is what the admin guide is referring to... (for clients the active connections are the number of bricks, and for bricks the number of clients)
17:59 cloph say I have a volume with only one brick and only mount that from the same host. Would the thread counts matter at all?
18:00 cloph ah, so if I have a volume with three bricks, a client-thread count of >3 doesn't make sense?
18:00 shyam cloph: no, so there is a single connection here, from the client (mount) and from the brick, so adding more threads does not make sense (for the 1 brick, 1 client case)
18:01 shyam yes, as the client is at most connected to only 3 bricks, and each of these "event-threads" processes a connection at a time, so any extra threads are just that, extra
18:02 shyam hence the reference to netstat and checking the number of connections, etc.
18:02 shyam in the documentation that you pointed out...
18:02 cloph yeah, but doesn't mention what to look for in the netstat...
18:03 shyam Hmmm... ok the answer would be connection to *a* brick port, or from a client perspective, connections to *any* of the brick ports (can you post a bug or a comment on that doc. for further clarification, as an aside)
18:06 cloph Let's see whether I understood first :-) so if I have a volume consisting of three bricks, having a client thread-count of three is the maximum useful value, since all reads/writes can only be directed to one of the three bricks. But on the server side it correlates with the number of client servers that have the volume mounted?
18:07 shyam client side thread count: your understanding checks :)
18:09 shyam server side thread count: your understanding checks, iff you mean any type of mount (NFS, FUSE, other) when you state "client servers" and also include gluster service daemons like the self-heal daemon (which will create its own connections to its replica pairs) (determining which is probably simpler using netstat than working through "what else could they be" type thoughts, I would think)
18:10 cloph ah, thanks for that clarification, indeed I only counted the nfs, fuse, ... mounts and not the self-heal daemon for example.
18:15 shyam cloph: Also, pushing this (server threads) to the maximum, or to cover all connections, may not really be what is needed to improve performance, just saying. The threads should keep up with incoming work and not starve requests in the queue; as far as that is satisfied (check using the TCP send/recv queue sizes, or how busy the epoll thread is in the brick daemon (threaded 'top' view of the glusterfsd process)) there should not be a great need to add more
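Shyam's guidance expressed as commands, as a sketch; the volume name and thread counts are placeholders to be matched against what netstat actually shows:
    gluster volume set myvol client.event-threads 4              # roughly the number of bricks this client talks to
    gluster volume set myvol server.event-threads 4              # roughly the number of clients, incl. gluster's own daemons
    netstat -tnp | grep glusterfsd | grep ESTABLISHED | wc -l    # count active connections to the brick processes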
18:17 guhcampos_ joined #gluster
18:21 cloph yeah, thought so.. as to filing a bug, I guess the gluster github project is the wrong place?
18:22 elico joined #gluster
18:23 shyam Ummm... yeah that would not work... I thought there was a comment button down there in the docs. but apparently not so. I'll take care of notifying the right people then, I guess ;)
18:25 cloph thanks a bunch.
18:25 shyam cloph: Done (at least it is in my sent box ;) )
18:29 elico joined #gluster
18:36 guhcampos joined #gluster
18:36 deniszh joined #gluster
18:45 hagarth joined #gluster
18:49 elico joined #gluster
19:01 gluster-newb joined #gluster
19:06 pdrakeweb joined #gluster
19:06 gluster-newb So we have a 3-node replicated cluster. Our gluster client (mounted via FUSE) got disconnected from a brick because of a 42 second ping-timeout (probably caused by heavy IO load). How long does it take for the client to connect back to the brick, and is that configurable?
19:09 gluster-newb sorry.  our gluster client is mounted via glusterfs (which I think is FUSE)
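The 42 seconds corresponds to the network.ping-timeout volume option, which is tunable; how quickly the client retries the connection afterwards is not answered in this log. A sketch with a placeholder volume name:
    gluster volume get myvol network.ping-timeout      # default is 42 seconds
    gluster volume set myvol network.ping-timeout 30   # example value only, not a recommendation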
19:09 bwerthmann joined #gluster
19:20 johnmilton joined #gluster
19:24 om joined #gluster
19:34 ahino joined #gluster
19:35 om joined #gluster
19:40 aj__ joined #gluster
19:45 hagarth joined #gluster
19:50 Siavash_ joined #gluster
19:51 Siavash_ joined #gluster
19:52 om joined #gluster
19:54 armyriad joined #gluster
19:56 shaunm joined #gluster
19:56 dgandhi joined #gluster
19:58 dgandhi joined #gluster
20:11 Siavash__ joined #gluster
20:12 armyriad joined #gluster
20:21 Siavash__ joined #gluster
20:39 elico joined #gluster
20:40 johnmilton joined #gluster
20:42 bwerthmann joined #gluster
21:05 om joined #gluster
21:26 deniszh joined #gluster
21:50 kpease_ joined #gluster
21:57 kpease joined #gluster
22:09 plarsen joined #gluster
22:28 hagarth joined #gluster
22:32 Pupeno joined #gluster
22:37 julim joined #gluster
22:37 kukulogy joined #gluster
23:08 plarsen joined #gluster
23:23 d0nn1e joined #gluster
