
IRC log for #gluster, 2016-04-18


All times shown according to UTC.

Time Nick Message
00:07 bwerthmann joined #gluster
01:01 bwerthmann joined #gluster
01:08 siel joined #gluster
01:18 EinstCrazy joined #gluster
01:49 EinstCra_ joined #gluster
01:55 bwerthmann joined #gluster
02:14 siel joined #gluster
02:27 kshlm joined #gluster
02:40 DV_ joined #gluster
02:45 haomaiwa_ joined #gluster
02:49 bwerthmann joined #gluster
02:50 ramteid joined #gluster
02:52 alghost__ left #gluster
02:56 Lee1092 joined #gluster
03:01 haomaiwa_ joined #gluster
03:03 EinstCrazy joined #gluster
03:41 haomaiwang joined #gluster
03:43 bwerthmann joined #gluster
04:01 nbalacha joined #gluster
04:01 haomaiwang joined #gluster
04:01 18VAAPB9X joined #gluster
04:06 hchiramm joined #gluster
04:12 shubhendu joined #gluster
04:13 shubhendu joined #gluster
04:15 overclk joined #gluster
04:20 gem joined #gluster
04:38 hgowtham joined #gluster
04:38 gem_ joined #gluster
04:41 RameshN joined #gluster
04:41 geniusoflime joined #gluster
04:42 geniusoflime any ganesha experts in the house? How do I update an exported volume ACL
04:48 karnan joined #gluster
04:52 rafi joined #gluster
04:52 Manikandan joined #gluster
04:53 Manikandan joined #gluster
04:57 ashiq joined #gluster
05:01 haomaiwa_ joined #gluster
05:03 hackman joined #gluster
05:09 aspandey joined #gluster
05:25 ppai joined #gluster
05:28 poornimag joined #gluster
05:28 Apeksha joined #gluster
05:30 karthik___ joined #gluster
05:32 bwerthmann joined #gluster
05:37 skoduri joined #gluster
05:41 nehar joined #gluster
05:44 [diablo] joined #gluster
05:46 Bhaskarakiran joined #gluster
05:46 DV_ joined #gluster
06:02 hackman joined #gluster
06:05 nishanth joined #gluster
06:08 kdhananjay joined #gluster
06:10 spalai joined #gluster
06:22 mhulsman joined #gluster
06:25 anil_ joined #gluster
06:25 bwerthmann joined #gluster
06:34 Wizek joined #gluster
06:35 kotreshhr joined #gluster
06:36 jtux joined #gluster
06:42 skoduri joined #gluster
06:42 ramky joined #gluster
06:43 kdhananjay1 joined #gluster
06:45 kovshenin joined #gluster
06:46 [Enrico] joined #gluster
06:47 haomaiwa_ joined #gluster
06:48 nbalacha joined #gluster
06:48 RameshN joined #gluster
06:49 aravindavk joined #gluster
06:54 msvbhat joined #gluster
06:55 pur_ joined #gluster
07:02 haomaiwa_ joined #gluster
07:03 deniszh joined #gluster
07:04 Wizek joined #gluster
07:12 [Enrico] joined #gluster
07:13 aravindavk joined #gluster
07:19 jri joined #gluster
07:23 ctria joined #gluster
07:26 nbalacha joined #gluster
07:26 bluenemo joined #gluster
07:28 Apeksha joined #gluster
07:28 haomaiwa_ joined #gluster
07:31 kdhananjay joined #gluster
07:35 mbukatov joined #gluster
07:40 arcolife joined #gluster
07:43 jwd joined #gluster
07:44 fsimonce joined #gluster
07:46 RameshN joined #gluster
07:53 Slashman joined #gluster
08:01 Marbug joined #gluster
08:03 skoduri joined #gluster
08:06 hchiramm joined #gluster
08:14 bwerthmann joined #gluster
08:16 wnlx joined #gluster
08:28 Acinonyx joined #gluster
08:28 Acinonyx Hello
08:28 glusterbot Acinonyx: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:35 kdhananjay joined #gluster
08:36 aspandey_ joined #gluster
08:40 goretoxo joined #gluster
08:45 nbalacha joined #gluster
08:54 rastar joined #gluster
08:57 skoduri joined #gluster
08:59 rouven joined #gluster
09:08 bwerthmann joined #gluster
09:10 harish_ joined #gluster
09:15 harish_ joined #gluster
09:20 Acinonyx Is GlusterFS fully POSIX compliant?
09:21 post-factum more or less, excluding some specific features like mandatory locking
09:21 Acinonyx hmm
09:22 Acinonyx So, what happens if someone uses locking?
09:22 post-factum afaik, no one in their right mind uses mandatory locking :)
09:22 Acinonyx Let's say with the FUSE client: does it return an error or lead to a deadlock?
09:26 * anoopcs would like to remind that mandatory locking is not POSIX compliant.
09:28 Acinonyx anoopcs: are you sure? fcntl() is mentioned in the POSIX standard
09:29 anoopcs Acinonyx, fcntl is as per POSIX, but not mandatory locks.
09:29 anoopcs Acinonyx, I hope you understand the difference between the two.
09:30 Acinonyx true
09:30 post-factum Mandatory locking is not specified by POSIX.  Some other systems also support mandatory locking, although the details of how to enable it vary across systems.
09:30 post-factum hmm, sorry then
09:30 post-factum have read the man page
09:30 Acinonyx :)
09:31 Acinonyx I can see a translator features/locks
09:31 kdhananjay joined #gluster
09:31 anoopcs Acinonyx, https://github.com/torvalds/linux/blob/master/Documentation/filesystems/mandatory-locking.txt#L62
09:31 glusterbot Title: linux/mandatory-locking.txt at master · torvalds/linux · GitHub (at github.com)
09:32 Acinonyx anoopcs: yes, I understand the difference
09:32 anoopcs Acinonyx, Yes. locks translator handles all fcntl lock requests.
09:32 post-factum mandatory locks are going to be removed from the kernel, btw
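
For context, the advisory fcntl byte-range locking discussed above looks roughly like the following minimal C sketch (an illustration, not code from the channel); the same call works on a local filesystem or a GlusterFS FUSE mount, where it is served by the features/locks translator:

    /* minimal sketch: take an advisory write lock on bytes 0-1 */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = {
            .l_type   = F_WRLCK,   /* exclusive (write) lock */
            .l_whence = SEEK_SET,
            .l_start  = 0,         /* first byte of the range */
            .l_len    = 2,         /* lock bytes 0-1 */
        };
        /* F_SETLK is non-blocking: it fails with EACCES/EAGAIN if the
         * range is already locked; F_SETLKW would block instead. */
        if (fcntl(fd, F_SETLK, &fl) == -1) {
            perror("fcntl(F_SETLK)");
            return 1;
        }
        printf("advisory lock held; released on close/exit\n");
        sleep(30);   /* hold the lock long enough to observe it */
        close(fd);
        return 0;
    }
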
09:32 nbalacha joined #gluster
09:32 Gnomethrower joined #gluster
09:41 ira joined #gluster
09:43 Acinonyx that's interesting.. I have locked a file in a Gluster FS mounted through fuse and 'lslock' does not show the lock
09:43 Acinonyx shouldn't it propagate? It's v3.6.9 btw
09:44 harish_ joined #gluster
09:45 atinm joined #gluster
09:47 post-factum https://github.com/logikal/lslock ← this?
09:47 glusterbot Title: GitHub - logikal/lslock: lslock - list the locks in a given directory (at github.com)
09:47 * post-factum is trying to find out wtf lslock is
09:47 Acinonyx sorry, 'lslocks'
09:47 Acinonyx missed an 's'
09:47 post-factum ah, ok
09:47 Acinonyx 'util-linux'
09:52 Acinonyx I tried the same with 'sshfs' FUSE and it shows the lock
09:52 post-factum yup, with a 3.7.9 client the lock is not shown either
09:52 Acinonyx that is bad
09:53 post-factum anoopcs: ^^ is this a suitable starting point for a bug report?
09:53 anoopcs Acinonyx, We don't propagate fcntl locks down to bricks. We handle them internally.
09:54 Acinonyx anoopcs: it is the opposite
09:54 Acinonyx I am checking the FUSE
09:54 * anoopcs hasn't used lslocks though.
09:54 post-factum it should be reflected in lslocks for fuse mountpount
09:54 post-factum s/mountpount/mountpoint/
09:54 glusterbot What post-factum meant to say was: it should be reflected in lslocks for fuse mountpoint
09:54 anoopcs For that I need to check how lslocks is implemented
09:54 Acinonyx the internal locks should propagate to FUSE
09:55 Acinonyx I'll take a look. I will also attempt to lock the whole file, the same region or partially overlapping region
09:55 anoopcs Acinonyx, man page for lslocks says it is for local system locks
09:56 Acinonyx it works for 'sshfs'
09:56 Acinonyx give me a few minutes to check the implementation and I'll report back
09:57 anoopcs Acinonyx, Ah..I think they are using the /proc/ subsystem to query the locks
09:58 post-factum Acinonyx: could you please try another glusterfs fuse client?
09:58 post-factum just curious if it works
09:58 post-factum Acinonyx: https://github.com/gluster/xglfs
09:58 aspandey_ joined #gluster
09:58 glusterbot Title: GitHub - gluster/xglfs: GlusterFS API FUSE client (at github.com)
09:59 Acinonyx I'll try it post-factum
09:59 Acinonyx thanks
09:59 anoopcs post-factum, Acinonyx: I think I guessed it right. It uses /proc/ for querying locks
10:00 post-factum anoopcs: so glusterfs fuse bridge does not update /proc
10:00 anoopcs So you won't see fcntl byte-range file locks taken by GlusterFS in /proc.
10:00 Acinonyx anoopcs: yes, it's '/proc'
10:01 post-factum anoopcs: hmm but i saw rangelocks-related patches in gerrit...
10:01 anoopcs post-factum, I mean we don't hand fcntl locks down to the local file system, which is what would let /proc/ get updated.
10:01 anoopcs post-factum, It's all in-memory.
10:01 post-factum anoopcs: but should we?
10:01 anoopcs Acinonyx, That's why lslocks is unable to fetch the lock list based on a pid or anything similar
10:02 bwerthmann joined #gluster
10:02 Acinonyx anoopcs: ok, I will test some scenarios and get back to you. If it does not affect F_SETLK then it should be fine
10:03 anoopcs Acinonyx, What do you mean by 'If it does not affect F_SETLK then it should be fine' ?
10:04 anoopcs I would like to convey that GlusterFS works very well with fcntl locks (advisory).
10:05 anoopcs post-factum, This is how it is designed.
10:06 Acinonyx anoopcs: I mean that as you describe it, I cannot see it in '/proc' but it is there
10:06 Acinonyx so, F_SETLK will fail (and F_SETLKW block) if I try to lock the same or an overlapping region, even if I do not see the range in '/proc' as locked
10:08 anoopcs Acinonyx, I would say that for locks within GlusterFS, do not look into /proc.
10:08 anoopcs But things will work as per POSIX standards.
10:08 anoopcs If you see anything missing, report it here and we will analyze the case.
10:09 Acinonyx thanks
10:10 anoopcs Acinonyx, yw.
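
To make the conclusion of this exchange concrete, here is a hedged C sketch (an illustration, assuming a test file on a GlusterFS FUSE mount): the parent takes an fcntl lock that never shows up in /proc/locks, which is all lslocks reads, yet a second process contending for the same range is still refused, because the locks translator enforces it in memory:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file-on-gluster-mount>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = 0, .l_len = 2 };
        if (fcntl(fd, F_SETLKW, &fl) == -1) { perror("parent lock"); return 1; }

        if (fork() == 0) {
            /* child: a separate process, hence a separate lock owner */
            int cfd = open(argv[1], O_RDWR);
            if (cfd < 0) _exit(1);
            struct flock cfl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                                 .l_start = 0, .l_len = 2 };
            if (fcntl(cfd, F_SETLK, &cfl) == -1)
                printf("child: F_SETLK refused as expected: %s\n",
                       strerror(errno));
            else
                printf("child: unexpectedly got the lock\n");
            fflush(stdout);
            /* on a gluster mount the parent's lock is absent from this list */
            execlp("lslocks", "lslocks", (char *)NULL);
            _exit(1);
        }
        wait(NULL);
        return 0;
    }
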
10:36 atinm joined #gluster
10:46 caitnop joined #gluster
10:55 freakzz joined #gluster
10:55 freakzz left #gluster
10:56 bwerthmann joined #gluster
10:57 karnan joined #gluster
11:02 lezo_ joined #gluster
11:02 anti[Enrico] joined #gluster
11:02 scobanx joined #gluster
11:03 PotatoGim_ joined #gluster
11:04 Acinonyx Hmm, it seems that there is no deadlock detection in GlusterFS POSIX locks
11:04 ic0n_ joined #gluster
11:04 mkzero_ joined #gluster
11:04 nixpanic_ joined #gluster
11:04 Acinonyx two processes locking different byte ranges can easily deadlock
11:04 nixpanic_ joined #gluster
11:05 pocketprotector- joined #gluster
11:05 rjoseph|1fk joined #gluster
11:05 anoopcs Acinonyx, You are right. Deadlock detection is far from complete in GlusterFS. Can you explain the procedure you followed?
11:05 kkeithley1 joined #gluster
11:06 XpineX_ joined #gluster
11:06 anoopcs I mean the flow of events.
11:06 Acinonyx yes
11:06 Acinonyx give me a minute
11:06 anoopcs sure
11:06 marlinc joined #gluster
11:06 Acinonyx any preferred pastebin?
11:06 anoopcs @paste
11:07 glusterbot anoopcs: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
11:07 anoopcs Acinonyx, ^^
11:07 scobanx Hi, my setup is a 78x(16+4) disperse volume. While running a write test from 50 clients, each client with two threads, I see that only 1/3 of the nodes receive writes. How is the hashing of files to subvolumes done?
11:07 pocketprotector joined #gluster
11:07 owlbot` joined #gluster
11:07 anoopcs Acinonyx, or fpaste.org
11:08 marlinc joined #gluster
11:08 mmckeen joined #gluster
11:08 tbm_ joined #gluster
11:10 shyam joined #gluster
11:11 Acinonyx http://termbin.com/nooh
11:11 saltsa joined #gluster
11:11 csaba joined #gluster
11:12 Acinonyx './glusterfs-deadlocker mylockfile 1' and then './glusterfs-deadlocker mylockfile 2'
11:13 anoopcs Acinonyx, Can you please do the paste in fpaste.org? For some weird reason I can't access termbin.com
11:13 anoopcs sorry
11:13 Acinonyx the first locks region 0-1 and after 10 seconds region 2-3
11:13 Acinonyx the second does the opposite: region 2-3 and after 10s 0-1
11:14 Acinonyx this leads to a deadlock
11:14 Acinonyx https://paste.fedoraproject.org/356956/
11:14 glusterbot Title: #356956 Fedora Project Pastebin (at paste.fedoraproject.org)
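
The paste itself is not reproduced in this log; the following is a hedged reconstruction from the description above, not the actual "glusterfs-deadlocker" source. Instance 1 locks bytes 0-1, sleeps 10 seconds, then blocks on bytes 2-3; instance 2 does the opposite. On a local Linux filesystem the kernel's deadlock detection makes the second F_SETLKW fail with EDEADLK; on a GlusterFS mount, lacking that detection, both instances hang:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* blocking write lock on [start, start+len) */
    static int lock_range(int fd, off_t start, off_t len)
    {
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = start, .l_len = len };
        return fcntl(fd, F_SETLKW, &fl);
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <lockfile> <1|2>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        int first = (atoi(argv[2]) == 1);
        /* phase 1: each instance grabs "its" range */
        if (lock_range(fd, first ? 0 : 2, 2) == -1) { perror("phase 1"); return 1; }
        printf("instance %s: got first range, sleeping 10s\n", argv[2]);
        sleep(10);
        /* phase 2: each instance then wants the other's range; with both
         * running, this is an ABBA deadlock unless the FS detects it */
        if (lock_range(fd, first ? 2 : 0, 2) == -1) { perror("phase 2"); return 1; }
        printf("instance %s: got both ranges (no deadlock)\n", argv[2]);
        return 0;
    }
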
11:14 cholcombe joined #gluster
11:15 Acinonyx lol @glusterbot
11:16 shortdudey123 joined #gluster
11:17 kotreshhr left #gluster
11:17 Guest89761 joined #gluster
11:18 tdasilva joined #gluster
11:20 bluenemo joined #gluster
11:20 hagarth joined #gluster
11:21 anoopcs Acinonyx, This is a classic example of deadlock, and as I mentioned above, deadlock detection is missing within GlusterFS.
11:21 Acinonyx yup
11:21 johnmilton joined #gluster
11:22 anoopcs Acinonyx, I had done some work related to this area some time ago, but it was not completed.
11:23 EinstCrazy joined #gluster
11:23 anoopcs Deadlock detection in a distributed file system is a bit more complicated.
11:23 Acinonyx yes
11:25 Acinonyx Is your work publicly available somewhere? I would like to make an attempt to continue it, if that's ok with you
11:27 anoopcs Acinonyx, That was basically a blind attempt to imitate what the Linux kernel does, which will not work in cases where a deadlock occurs with lock contention on two different files.
11:28 anoopcs Acinonyx, Here it is: https://github.com/anoopcs9/glusterfs/tree/deadlock
11:28 glusterbot Title: GitHub - anoopcs9/glusterfs at deadlock (at github.com)
11:28 Acinonyx yes, that was my initial thought, too. Check how the kernel does it and try to do the same
11:28 Acinonyx thank you
11:33 Manikandan joined #gluster
11:45 ppai joined #gluster
11:50 bwerthmann joined #gluster
11:51 Qu310 joined #gluster
11:54 Qu310 left #gluster
12:07 EinstCrazy joined #gluster
12:08 nottc joined #gluster
12:11 hgowtham joined #gluster
12:12 kdhananjay joined #gluster
12:19 ppai joined #gluster
12:20 russoisraeli joined #gluster
12:23 ic0n joined #gluster
12:24 mhulsman joined #gluster
12:25 russoisraeli joined #gluster
12:30 julim joined #gluster
12:32 pdrakeweb joined #gluster
12:32 russoisraeli joined #gluster
12:42 crashmag joined #gluster
12:44 bwerthmann joined #gluster
12:44 aspandey joined #gluster
12:47 aspandey_ joined #gluster
12:52 nhayashi joined #gluster
12:56 hgichon joined #gluster
12:56 Ulrar joined #gluster
12:56 nottc joined #gluster
12:57 unclemarc joined #gluster
12:59 DV__ joined #gluster
13:01 DV_ joined #gluster
13:10 nbalacha joined #gluster
13:13 nbalacha joined #gluster
13:14 ivan_rossi joined #gluster
13:16 itisravi joined #gluster
13:18 Hesulan joined #gluster
13:20 Shirwa joined #gluster
13:21 bwerthmann joined #gluster
13:21 Shirwa Hi All, Is it possible to create a distributed-replicated volume using only a single brick on each of two nodes?
13:23 rafi1 joined #gluster
13:23 atalur joined #gluster
13:23 moss joined #gluster
13:24 sakshi joined #gluster
13:25 jiffin joined #gluster
13:26 bennyturns joined #gluster
13:27 ctria joined #gluster
13:27 post-factum Shirwa: could you please be more specific?
13:28 bennyturns joined #gluster
13:28 mpietersen joined #gluster
13:30 Shirwa post-factum: Is there any limitation on how many bricks you can create per volume?
13:32 post-factum Shirwa: in general, no
13:33 post-factum Shirwa: what is the task you try to solve?
13:35 atinm joined #gluster
13:36 Shirwa post-factum: I have configured a two-node glusterfs distributed-replicated volume, with two bricks on each server. Is it better to use a single brick on each node, or two?
13:37 xMopxShell joined #gluster
13:37 Hesulan joined #gluster
13:42 post-factum Shirwa: depends on what you try to achieve
13:43 post-factum Shirwa: it is about flexibility as well: whether you will shrink/expand volumes and so on
13:47 kdhananjay joined #gluster
13:48 post-factum Shirwa: i prefer breaking the distributed-replicated address space into multiple bricks to make the layout flexible and to parallelize io load
13:50 plarsen joined #gluster
13:52 lalatenduM joined #gluster
13:55 shubhendu joined #gluster
13:55 arcolife joined #gluster
13:57 Shirwa post-factum: Thanks
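
For reference, the 2x2 layout under discussion would be created roughly as follows (hostnames, volume name and brick paths are hypothetical). With 'replica 2', consecutive bricks on the command line form a replica pair, so this yields two replica sets, distributed, with each pair spanning both nodes:

    # two nodes, two bricks each -> distributed-replicated, 2 x 2
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server1:/data/brick2 server2:/data/brick2
    gluster volume start myvol
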
14:01 btpier joined #gluster
14:02 ctria joined #gluster
14:13 JonathanD joined #gluster
14:23 amye joined #gluster
14:26 kovshenin joined #gluster
14:33 mhulsman joined #gluster
14:41 hgichon0 joined #gluster
14:48 coredump|br joined #gluster
14:51 kshlm joined #gluster
14:58 atrius joined #gluster
15:01 shyam left #gluster
15:02 shubhendu joined #gluster
15:04 dlambrig_ joined #gluster
15:05 nottc joined #gluster
15:08 jbrooks joined #gluster
15:13 nigelb joined #gluster
15:14 bennyturns joined #gluster
15:15 julim joined #gluster
15:16 shyam joined #gluster
15:26 Apeksha joined #gluster
15:32 spalai left #gluster
15:35 aspandey joined #gluster
15:40 devilspgd joined #gluster
15:48 cuqa_ joined #gluster
15:51 jobewan joined #gluster
15:52 jobewan joined #gluster
15:55 jlp1 if i have a distributed volume consisting of a single brick, can i replace that brick while the volume's online? can i add another brick and then remove the original?
16:02 farhorizon joined #gluster
16:04 rafi joined #gluster
16:06 rouven joined #gluster
16:10 raginbajin joined #gluster
16:10 Champi joined #gluster
16:16 samikshan joined #gluster
16:17 hagarth joined #gluster
16:26 vmallika joined #gluster
16:30 edong23 joined #gluster
16:33 skoduri joined #gluster
16:35 amye joined #gluster
16:45 JoeJulian jlp1: yes and yes
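
A sketch of the second path confirmed here (volume name and brick paths are hypothetical): remove-brick in start/commit mode migrates the data off the old brick before it is dropped, so the volume stays online throughout:

    # add the replacement brick to the distributed volume
    gluster volume add-brick myvol newserver:/data/newbrick
    # start draining the original brick (data is rebalanced away)
    gluster volume remove-brick myvol oldserver:/data/oldbrick start
    # poll until the migration shows "completed"
    gluster volume remove-brick myvol oldserver:/data/oldbrick status
    # then drop the brick for good
    gluster volume remove-brick myvol oldserver:/data/oldbrick commit
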
16:45 kpease joined #gluster
16:50 kpease joined #gluster
16:58 Gnomethrower joined #gluster
17:00 nbalacha joined #gluster
17:14 jlp1 JoeJulian: thanks
17:26 vmallika joined #gluster
17:26 ivan_rossi left #gluster
17:29 Gnomethrower joined #gluster
17:30 rafi joined #gluster
17:31 prasanth joined #gluster
17:33 coredump|br joined #gluster
17:52 arcolife joined #gluster
17:58 prasanth joined #gluster
17:59 nathwill joined #gluster
17:59 jwd joined #gluster
18:04 skoduri joined #gluster
18:07 amye joined #gluster
18:07 nathwill is making the existing NFS system a gluster server, then gradually adding nodes still the best path for migrating from NFS to gluster?
18:08 nathwill we were planning on doing an rsync with a cutover window, but it looks like our data set's too large to do this with reasonable amounts of downtime
18:08 JoeJulian It is a path. "best" depends on use case
18:08 JoeJulian Seems reasonable based on that statement.
18:12 nishanth joined #gluster
18:12 nathwill ok. time to rewrite this migration plan then :D
18:19 nathwill thanks, JoeJulian :)
18:20 JoeJulian You're welcome.
19:04 deniszh joined #gluster
19:13 sadbox joined #gluster
19:21 raginbajin joined #gluster
19:21 amye joined #gluster
19:24 hgichon joined #gluster
19:28 russoisraeli joined #gluster
19:37 ctria joined #gluster
19:45 purpleidea joined #gluster
19:49 rauchrob joined #gluster
19:58 ParsectiX joined #gluster
20:00 ParsectiX joined #gluster
20:18 mattmcc joined #gluster
20:23 wnlx joined #gluster
20:30 ic0n joined #gluster
20:42 ron-slc joined #gluster
20:57 hgichon0 joined #gluster
21:25 johnmilton joined #gluster
21:35 jobewan joined #gluster
22:26 plarsen joined #gluster
22:32 johnmilton joined #gluster
22:34 cliluw joined #gluster
22:43 hackman joined #gluster
22:53 russoisraeli joined #gluster
23:10 amye joined #gluster
23:34 dlambrig_ joined #gluster
23:46 Hesulan joined #gluster
23:50 russoisraeli joined #gluster
23:54 farhorizon joined #gluster
23:59 MugginsM joined #gluster
