
IRC log for #gluster, 2017-07-19


All times shown according to UTC.

Time Nick Message
00:01 tarepanda joined #gluster
00:27 vbellur joined #gluster
00:34 tarepanda @joejuian Hope your daughter's tonsillectomy went smoothly. :)
00:34 tarepanda @joejulian
01:03 shdeng joined #gluster
01:13 fassl joined #gluster
01:16 victori joined #gluster
01:22 Alghost joined #gluster
01:25 Saravanakmr joined #gluster
01:26 wushudoin joined #gluster
01:34 riyas joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:38 prasanth joined #gluster
03:02 kramdoss_ joined #gluster
03:29 susant joined #gluster
03:32 riyas joined #gluster
03:39 ppai joined #gluster
03:40 psony joined #gluster
03:48 nbalacha joined #gluster
04:07 skumar joined #gluster
04:10 atinmu joined #gluster
04:11 itisravi joined #gluster
04:19 mlg9000 joined #gluster
04:20 poornima joined #gluster
04:28 dominicpg joined #gluster
04:30 Shu6h3ndu__ joined #gluster
04:44 buvanesh_kumar joined #gluster
04:49 atalur joined #gluster
04:51 shdeng joined #gluster
05:09 Karan joined #gluster
05:11 msvbhat joined #gluster
05:12 Prasad joined #gluster
05:12 psony_ joined #gluster
05:14 ashiq joined #gluster
05:16 ashiq joined #gluster
05:20 skoduri joined #gluster
05:29 susant joined #gluster
05:29 Saravanakmr joined #gluster
05:30 atinm_ joined #gluster
05:33 karthik_us joined #gluster
05:33 ashiq joined #gluster
05:34 fiyawerx_ joined #gluster
05:43 jiffin joined #gluster
05:45 armyriad joined #gluster
05:47 ankitr joined #gluster
05:52 ankitr joined #gluster
05:54 apandey joined #gluster
05:56 armyriad joined #gluster
06:02 hgowtham joined #gluster
06:10 ankitr joined #gluster
06:11 sanoj joined #gluster
06:13 ayaz joined #gluster
06:14 ayaz left #gluster
06:16 kdhananjay joined #gluster
06:28 shdeng joined #gluster
06:29 sanoj joined #gluster
06:33 rafi joined #gluster
06:37 rastar joined #gluster
06:38 sona joined #gluster
06:44 ndarshan joined #gluster
06:55 atinm joined #gluster
06:55 msvbhat joined #gluster
06:56 mlg9000 joined #gluster
07:04 msvbhat_ joined #gluster
07:05 Intensity joined #gluster
07:11 _KaszpiR_ joined #gluster
07:14 mlg9000 joined #gluster
07:21 ekarlso joined #gluster
07:27 winrhelx joined #gluster
07:38 msvbhat joined #gluster
07:44 ankitr joined #gluster
07:45 mahendratech joined #gluster
07:53 mahendratech joined #gluster
08:15 ivan_rossi joined #gluster
08:17 dubs joined #gluster
08:17 ndarshan joined #gluster
08:19 Wizek_ joined #gluster
08:26 sanoj joined #gluster
08:32 ndarshan joined #gluster
08:36 [diablo] joined #gluster
08:46 sunkumar joined #gluster
08:56 rafi2 joined #gluster
09:00 atinm_ joined #gluster
09:09 jiffin1 joined #gluster
09:13 ndarshan joined #gluster
09:14 hasi joined #gluster
09:25 hasi Hi guys, I'm new to Gluster and have a question about geo-replication. I have two nodes acting as the primary and another two nodes at the DR site, and I'm trying to configure geo-replication between them. Does this need to be done on both primary nodes, or can I use only one?
09:33 mahendratech joined #gluster
09:53 mb_ joined #gluster
10:06 rafi1 joined #gluster
10:07 cloph not sure what you mean with "DR site" - and the geo-replication is per gluster volume, so not sure what you mean when you say you  have two "primary" nodes
10:09 msvbhat_ joined #gluster
10:14 om2 joined #gluster
10:33 mahendratech joined #gluster
10:41 atinm_ joined #gluster
11:01 sanoj joined #gluster
11:01 hasi Hi cloph, sorry, I described it in terms of the architecture. We are setting up GlusterFS at two sites: the primary site has 2 gluster nodes with 6 bricks each in a distributed-replicated volume, and the DR site has another 2 gluster nodes with 6 bricks each, also distributed-replicated.
11:02 hasi So what I'm trying to do is establish geo-replication between the two sites.
11:04 hasi Using gluster volume replication, what I want to achieve is a near real-time copy of the primary site's Gluster data at the DR site.
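A rough sketch of what that setup usually looks like, for readers following along: geo-replication is configured per volume and driven from a single node of the primary (master) cluster, pushing to a volume on the DR cluster. "mastervol", "drhost" and "drvol" below are placeholder names, not taken from hasi's setup.

    gluster system:: execute gsec_create                              # generate the common pem key on the master cluster
    gluster volume geo-replication mastervol drhost::drvol create push-pem
    gluster volume geo-replication mastervol drhost::drvol start
    gluster volume geo-replication mastervol drhost::drvol status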
11:19 mahendratech left #gluster
11:20 mahendratech joined #gluster
11:22 shyam joined #gluster
11:22 mahendratech joined #gluster
11:23 baber joined #gluster
11:35 rastar joined #gluster
11:50 rafi1 joined #gluster
12:08 Anarka joined #gluster
12:15 itisravi joined #gluster
12:16 darshan joined #gluster
12:16 bluenemo joined #gluster
12:29 rafi1 joined #gluster
12:33 kramdoss_ joined #gluster
12:47 plarsen joined #gluster
12:49 sona joined #gluster
12:52 darshan joined #gluster
12:54 plarsen joined #gluster
12:55 buvanesh_kumar joined #gluster
12:57 jstrunk joined #gluster
13:01 buvanesh_kumar_ joined #gluster
13:03 darshan joined #gluster
13:07 rwheeler_ joined #gluster
13:10 kramdoss_ joined #gluster
13:15 plarsen joined #gluster
13:30 skylar joined #gluster
13:33 baber joined #gluster
13:36 kdhananjay joined #gluster
13:42 plarsen joined #gluster
13:42 [diablo] joined #gluster
13:51 kramdoss_ joined #gluster
13:56 nbalacha joined #gluster
14:08 ahino joined #gluster
14:08 winrhelx joined #gluster
14:12 plarsen joined #gluster
14:29 ahino1 joined #gluster
14:31 ankitr joined #gluster
14:39 DV joined #gluster
14:47 sunkumar joined #gluster
14:52 farhorizon joined #gluster
14:57 atinmu joined #gluster
15:00 om2 joined #gluster
15:04 shyam joined #gluster
15:04 kshlm Community meeting is on now in #gluster-meeting
15:06 wushudoin joined #gluster
15:11 ahino joined #gluster
15:16 msvbhat joined #gluster
15:26 cholcombe joined #gluster
15:46 susant joined #gluster
15:51 kramdoss_ joined #gluster
15:51 rastar joined #gluster
16:00 plarsen joined #gluster
16:03 rastar joined #gluster
16:05 kpease joined #gluster
16:13 koolfy joined #gluster
16:14 koolfy hello
16:14 glusterbot koolfy: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
16:18 koolfy I'm trying to debug an issue, and I can't understand why I can't cat one of the files on the local brick (.gluster/*), I don't get an error just an empty line
16:19 koolfy if I cat the corresponding file on the gluster mountpoint it works
16:20 baber joined #gluster
16:23 hasi joined #gluster
16:31 major the file is out of sync on that brick?
16:31 plarsen joined #gluster
16:35 koolfy I get the same result on the replica brick
16:35 koolfy so I don't get how the gluster mointpoint can answer but not the individual bricks
16:36 major and its just a replica volume? and no arbiters?
16:38 koolfy what do you mean? I tried it on both bricks that manage that particular file. It has permissions 1000, and other files that do not have permission 1000 I can read
16:39 major so there are only 2 bricks in the volume?
16:39 major ...
16:39 major thats a funny permission
16:39 koolfy no, a bunch of them but only replica 2
16:40 koolfy yeah, but whatever the permission, root should always be able to read, right? :)
16:41 major erm .. this is in .gluster/ ?
16:41 major well .. no .. root isn't entirely badass
16:41 major and .. you can technically take away roots permissions and make them super unspecial..
16:42 major no one really does it .. but .. its been possible for almost 10 years now to run Linux in a pure capabilities model
16:42 major that aside .. I doubt it is causing your issue
16:43 major soo .. I am curious .. why poke around in .gluster?
16:44 baber joined #gluster
16:44 atinmu joined #gluster
16:44 koolfy haha, and yes it's in .gluster but also affects the associated hardlinks
16:45 koolfy I get these on some files
16:45 koolfy cat foo
16:45 koolfy cat foo: No data available
16:45 major and volume heal <volume> info ?
16:45 koolfy in debugging this I was poking around to see if the data was present on the bricks
16:46 major oh
16:46 major I am on the same page now
16:46 major sorry for the delay .. not enough coffee
16:46 major :P
16:46 koolfy the files listed on the heal info are those that need a heal?
16:46 koolfy haha
16:47 koolfy don't worry I'm discovering a lot of how gluster works today
16:47 major those are the files that gluster will eventually cleanup behind the scenes
16:48 major basically it has things that it is working on and has the data stored, but all the book-keeping isn't totally in place yet...
16:48 koolfy how can I force those ? I'll check if my file is listed
16:48 major I usually pipe the output through grep
16:48 major I also tend to wait for it to be done before poking around...
16:49 major the client knows where to find the data and make it all work, but there is some minor performance penalties with lookups and such
16:49 koolfy yeah the problematic file isn't listed in there
16:49 major unless the client is having issues directly when you use it then I would wait for it to be done
16:49 major well .. sometimes they are listed as their gfid :(
16:50 koolfy I have a list of paths
16:50 koolfy so that should be fine
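For reference, the two commands being discussed, with "myvol" as a placeholder volume name: filtering the heal backlog for a particular path, and kicking off a full self-heal sweep instead of waiting for the self-heal daemon to get to it.

    gluster volume heal myvol info | grep foo       # is this path in the pending-heal list?
    gluster volume heal myvol full                  # crawl the volume and heal everything it finds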
16:50 major I feel like I need an image of the Coyote holding up a sign that reads "its complicated"
16:50 major ;)
16:50 koolfy complicated is fine :)
16:52 major well .. on mntpoint side you can start with: getfattr -n glusterfs.gfid.string foo
16:54 koolfy oh I tried something similar
16:54 koolfy ok so not on the local brick then
16:55 koolfy yeah it returns the gfid
16:56 major https://gluster.readthedocs.io/en/latest/Troubleshooting/gfid-to-path/
16:56 glusterbot Title: gfid to path - Gluster Docs (at gluster.readthedocs.io)
16:57 koolfy oh but I have that path
16:57 koolfy that's where I did the initial cat actually
17:05 major getfattr -d -m . -e hex foo
17:06 major go go coffee #2
17:06 koolfy empty line
17:07 koolfy what is the -m . ?
17:07 jsierles joined #gluster
17:07 jsierles greetings
17:07 major pattern match
17:07 major koolfy, normally getfattr only dumps user.* stuff
17:07 koolfy okay
17:08 major the -m . tells it to just display everything
17:08 jsierles i'm setting up gluster for the first time. I have two peers connected across two servers. I've XFS-formatted a disk on both at /var/lib/clusterd. Is this where a 'brick' will be created?
17:08 major because .. there isn't a getfattr -a :(
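Side by side, the two getfattr invocations from above; paths and volume name are placeholders. The first runs against the fuse mount and reads the virtual gfid xattr, the second runs against the brick directory and dumps every extended attribute in hex (on a healthy replicated file you would normally expect at least trusted.gfid plus the trusted.afr.* changelog attributes).

    getfattr -n glusterfs.gfid.string /mnt/gluster/foo          # on the client mount
    getfattr -d -m . -e hex /data/brick1/myvol/foo              # on the brick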
17:08 koolfy :/
17:08 major jsierles, /var/lib/glusterd is where glusterd stores its runtime data .. not the brick data
17:09 koolfy but then it's a bit strange that I would get no result at all…
17:09 major koolfy, -d -m . -e hex dumps nothing?
17:09 jsierles major: OK, I figured. so the brick is just an empty filesystem, and i will use it when creating a volume?
17:09 major jsierles, correct, usually mounted off in a location such as /srv/brick1/<name>
17:10 koolfy major: nothing :(
17:10 major jsierles, correction, mounted as brick1/ but .. you kinda want to target a directory inside the mount point
17:10 major jsierles, the reason being is that gluster wants a way to validate that the mount point is mounted...
17:10 jsierles major: well i'm mounting into a docker container
17:11 major mount <device> /data/brick1/;mkdir /data/brick1/<volname>
17:12 major and use that later path when adding the brick
17:12 major if that makes sense
17:12 jsierles yeah, sounds good
17:12 major that way gluster will fail to use the mountpoint if it isn't properly mounted
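A minimal sketch of the layout major is describing; the device, hostnames and volume name ("gv0") are placeholders. The brick path handed to gluster is a subdirectory of the mount point, so volume operations fail safely if the filesystem isn't actually mounted.

    mkfs.xfs /dev/sdb
    mkdir -p /data/brick1
    mount /dev/sdb /data/brick1
    mkdir /data/brick1/gv0
    gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
    gluster volume start gv0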
17:12 major koolfy, but it returns on other files...
17:13 major or .. returns data
17:13 koolfy foo should be on the mountpoint side right?
17:13 major koolfy, this is from the brick side, not the mountpoint
17:13 koolfy not the local brick?
17:13 major hehe
17:13 koolfy oooooh
17:13 major oops .. sorry
17:13 koolfy :)
17:13 koolfy np!
17:16 major hmm
17:17 koolfy …I think I lost my gfid path hold on, too many bricks :)
17:17 major actually .. maybe get a bit more basic ..: gluster volume heal <volume> info split-brain
17:17 koolfy ok
17:17 major only replica 2?
17:18 major I feel like I asked this before
17:18 koolfy 0 entries on every brick as a result of your command
17:18 koolfy and yes only replica 2
17:18 koolfy you didn't :)
17:18 major must of thought it
17:25 koolfy oh interesting
17:25 koolfy the local brick file that the mountpoint cannot read
17:25 koolfy "no data available"
17:26 major hmmmm
17:26 koolfy I can actually cat it on the local brick
17:26 koolfy so that's good
17:26 koolfy -rw-r--r--
17:26 glusterbot koolfy: -rw-r--r's karma is now -25
17:26 major glusterbot, your parser is crap
17:26 major hmmmm
17:27 koolfy some files are chmod 1000 so those I can't, but this is not my main focus, I'm more worried about mountpoint availability
17:27 major and the gfid reported on the mountpoint matches the gfid reported on the brick?
17:27 major I have never run into this, and the person who seems to have run into everything isn't responding this morning :P
17:28 koolfy that's how I found it on the brick, with the gfid
17:28 koolfy haha
17:28 koolfy I could retry tomorrow
17:28 koolfy I must get off work now actually :)
17:28 major kk .. I am totally curious though
17:28 koolfy yeah I'll keep you posted
17:29 major kk
17:29 koolfy thanks a lot for your help shough
17:29 koolfy though
17:29 koolfy are you affiliated with the project?
17:32 major nope
17:32 major I just work on the code when I can
17:32 major which has been a lot less recently than I want
17:32 amye #gluster is welcome to all users. :)
17:33 amye and generally everyone. So it's close enough, major.
17:33 koolfy :)
17:33 farhorizon joined #gluster
17:33 major amye, not close enough to magically teleport another bottle of Jameson onto my desk :(
17:33 major but I will get over it
17:33 JoeJulian If you're here asking or answering questions, you're officially affiliated.
17:34 atalur joined #gluster
17:34 major JoeJulian, there you are!
17:34 JoeJulian Not really
17:34 major damn I hate it when I hallucinate your presence
17:34 JoeJulian I have a deadline I'm expecting to hear that whooshing sound from as it goes by, but I'm trying.
17:35 major woke up in a panic from a dream last night over that sound
17:35 major turned out it was a passing airplane
17:36 JoeJulian For context: “I love deadlines. I like the whooshing sound they make as they fly by.” - Douglas Adams, The Salmon of Doubt
17:37 major ++
17:37 jsierles cool, i now have a replica setup
17:37 jsierles how would high availability work on the client?
17:38 jsierles say one of the replicas goes down
17:39 major jsierles, I have backupvolfile-server=<ipaddr> in the gluster clients mountopts in the fstab
17:40 jsierles i see
17:40 major it would be nice if the gluster client would figure all that out on its own and record it internally .. wishlist item I suppose
17:40 jsierles good enough for me
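A sample fstab entry along those lines, with placeholder hostnames. backupvolfile-server only matters at mount time (it is a fallback source for the volume file); once mounted, the client talks to all bricks directly and the replica translator handles a brick going down.

    server1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0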
17:40 jsierles when would I want to use readdirp?
17:41 major I can't help but laugh when I read that ..
17:42 JoeJulian Any time you don't want to deciple (is that a word? It's like triple but 10x.) your directory read times.
17:42 major hurpa dirp...
17:42 JoeJulian er, that's when you would want to *not* use it.
17:43 JoeJulian I'm going back to working on the stuff I know nothing about since communication is clearly not in my purview this morning.
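For context on the readdirp question: readdirp (readdir-plus) returns stat information together with directory entries, which saves the client a separate lookup per file when listing large directories. The fuse mount uses it by default; it can be switched off with a mount option if needed (hostname and paths below are placeholders).

    mount -t glusterfs -o use-readdirp=no server1:/gv0 /mnt/gluster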
17:43 _KaszpiR_ joined #gluster
17:43 major not enough coffee this morning
17:44 vbellur joined #gluster
17:48 [diablo] joined #gluster
17:50 major this is related to this? https://github.com/gluster/glusterfs/issues/64
17:50 glusterbot Title: Implement Parallel readdirp in dht · Issue #64 · gluster/glusterfs · GitHub (at github.com)
17:51 ahino joined #gluster
17:58 vbellur koolfy: one possibility could be that the file needs self-healing
17:58 koolfy I tried doing a stat -c %y on it to force it, was that stupid ?
17:59 vbellur koolfy: have you checked the output of 'volume heal info' ?
17:59 vbellur koolfy: what version of gluster are you running?
17:59 koolfy yes, the file isn't listed there
17:59 koolfy 3.6.9
18:01 farhorizon joined #gluster
18:01 jsierles hmm, no glusterfs on coreos :/
18:02 jsierles i suppose using nfs i don't get all the nice things
18:02 baber joined #gluster
18:03 vbellur koolfy: hmm, I would try doing `find <filename> | xargs stat` on a gluster mount
18:04 koolfy Size: 0         Blocks: 0          IO Block: 131072 regular empty file
18:05 koolfy Access: (1000/---------T)  Uid: (    0/    root)   Gid: (    0/    root)
18:05 glusterbot koolfy: (1000/-------'s karma is now -1
18:05 koolfy this permission is very weird
18:06 vbellur koolfy: this probably is a linkto file created by DHT
18:06 koolfy but that's on the mountpoint
18:06 koolfy it should look like a legitimate file right?
18:06 vbellur koolfy: are you using a replicated volume? 2-way?
18:07 koolfy 2 replicas yes
18:07 koolfy 16 bricks in total
18:14 baber joined #gluster
18:16 msvbhat joined #gluster
18:20 u_nuSLASHkm8 joined #gluster
18:20 u_nuSLASHkm8 left #gluster
18:27 sunkumar joined #gluster
18:28 vbellur joined #gluster
18:31 farhorizon joined #gluster
18:34 sergem joined #gluster
18:55 bowhunter joined #gluster
19:05 major and .. the blood letting has started..
19:05 major lets see how many are given their 2-weeks today
19:05 major or this week..
19:09 vbellur koolfy: sorry, got disconnected. If you check the replicated files on bricks, is one an actual data file and the other is a linkto file?
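One way to check that on the brick side, with placeholder paths: a DHT linkto file is a zero-byte file with mode 1000 (the sticky bit showing up as ---------T in ls) whose trusted.glusterfs.dht.linkto xattr names the subvolume that actually holds the data.

    getfattr -n trusted.glusterfs.dht.linkto -e text /data/brick1/myvol/foo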
19:21 tinyurl_comSLASH joined #gluster
19:21 tinyurl_comSLASH left #gluster
19:22 jsierles is glusterd the only process that needs to run for a server?
19:25 vbellur jsierles: glusterd is the management daemon. it spawns glusterfsds, the server daemons for data, upon starting a volume.
19:26 jsierles vbellur: OK. just wondering if that's the only thing i need to start
19:26 vbellur jsierles: yes, everything else is usually handled by glusterd
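On a running server that works out to one glusterd plus one glusterfsd per local brick of each started volume; a quick way to see them, assuming a started volume:

    pgrep -af 'glusterd|glusterfsd'
    gluster volume status          # lists brick processes, PIDs and ports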
19:27 jsierles i'm running it but see: USAGE: /usr/sbin/glusterd [options] [mountpoint]
19:27 vbellur jsierles: can you try glusterd --debug ?
19:27 jsierles this is on ubuntu. actually running inside a docker container
19:27 jsierles same output
19:32 jsierles nothing seems to work
19:32 tinyurl_comSLASH joined #gluster
19:33 jsierles does it require having a minimal setup in /etc?
19:34 jsierles that looks like it
19:35 tinyurl_comSLASH left #gluster
19:35 vbellur jsierles: right, it does need /etc/glusterfs/glusterd.vol
19:35 jsierles thanks
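For reference, the packaged default /etc/glusterfs/glusterd.vol is roughly the following (exact options differ between releases, so treat this as a sketch rather than a canonical copy):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
    end-volume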
19:36 baber joined #gluster
19:45 jsierles can i log to the console using --log-file=/dev/console?
19:47 Wizek_ joined #gluster
19:48 vbellur jsierles: /dev/stdout works
19:48 jsierles cheers
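In a container it is also common to keep glusterd in the foreground rather than letting it daemonize; something along these lines (check glusterd --help on your build for the exact flags):

    glusterd --no-daemon --log-level=INFO --log-file=/dev/stdout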
19:52 Jacob843 joined #gluster
19:55 johnnyNumber5 joined #gluster
20:19 baber joined #gluster
20:20 JesperA joined #gluster
20:26 vbellur joined #gluster
20:45 kpease joined #gluster
20:55 repnzscasb joined #gluster
20:55 repnzscasb joined #gluster
21:23 wushudoin joined #gluster
21:33 kpease joined #gluster
21:33 bwerthmann joined #gluster
21:45 shyam joined #gluster
22:20 baber joined #gluster
22:25 vbellur joined #gluster
23:19 Alghost joined #gluster
23:48 Alghost joined #gluster
23:52 om2 joined #gluster
