
IRC log for #gluster, 2017-05-09


All times shown according to UTC.

Time Nick Message
00:03 nh2 JoeJulian: should I see things like this in DEBUG glusterd.log?
00:03 nh2 Trying to acquire lock of vol N/A for c2038af1-6f0f-42ca-a6fd-69d452416992
00:03 nh2 why is my volume called N/A?
00:03 JoeJulian glusterd.log isn't a volume, it's the management daemon.
00:03 rastar joined #gluster
00:04 nh2 JoeJulian: so the fact that it says "vol N/A" is OK?
00:04 JoeJulian I would expect so.
00:06 nh2 in the glusterd log on the node where I execute the command I see this:
00:06 nh2 [2017-05-09 00:05:22.242902] D [MSGID: 0] [glusterd-syncop.c:1765:gd_brick_op_phase] 0-management: Sent op req to 0 bricks
00:06 nh2 [2017-05-09 00:05:22.254533] E [MSGID: 106336] [glusterd-geo-rep.c:5230:glusterd_op_sys_exec] 0-management: Unable to end. Error : Success
00:06 nh2 "Sent op req to 0 bricks" - how can you send something to 0 targets?
00:07 john51 joined #gluster
00:21 plarsen joined #gluster
00:23 itisravi joined #gluster
00:31 kramdoss_ joined #gluster
00:39 farhorizon joined #gluster
00:43 gyadav joined #gluster
00:49 plarsen joined #gluster
00:51 plarsen joined #gluster
00:53 bmurt joined #gluster
01:08 plarsen joined #gluster
01:30 shdeng joined #gluster
01:31 derjohn_mob joined #gluster
01:51 gyadav joined #gluster
02:05 alekun joined #gluster
02:05 alekun Hi everybody!
02:08 alekun ey I have a question for the channel: we are evaluating Gluster in HCI mode, does anyone have a recommendation regarding hard disk setup?
02:08 Peppard joined #gluster
02:21 Wizek__ joined #gluster
02:23 alejojo joined #gluster
02:27 alekun joined #gluster
02:41 k0nsl joined #gluster
02:41 k0nsl joined #gluster
02:57 alejojo joined #gluster
03:01 alekun joined #gluster
03:08 kramdoss_ joined #gluster
03:11 Gambit15 joined #gluster
03:26 alejojo joined #gluster
03:28 nbalacha_ joined #gluster
03:31 alekun joined #gluster
03:34 prasanth joined #gluster
03:46 riyas joined #gluster
03:47 itisravi joined #gluster
03:47 atinm joined #gluster
03:47 sanoj joined #gluster
04:07 gyadav joined #gluster
04:07 msvbhat joined #gluster
04:09 apandey joined #gluster
04:12 Prasad joined #gluster
04:16 pdrakewe_ joined #gluster
04:49 apandey joined #gluster
04:50 ankitr joined #gluster
04:51 buvanesh_kumar joined #gluster
05:00 skumar joined #gluster
05:05 ankitr joined #gluster
05:09 Shu6h3ndu joined #gluster
05:12 karthik_us joined #gluster
05:20 _KaszpiR_ JoeJulian thx for the info, yes, as I said later it looks like the facter lib needs a tweak, because right now it gathers info about bricks in a rather crude way based on hostnames, while I think it should use the uuid
05:23 ashiq joined #gluster
05:24 ndarshan joined #gluster
05:25 hgowtham joined #gluster
05:27 poornima joined #gluster
05:42 amarts joined #gluster
05:44 rafi joined #gluster
05:51 kdhananjay joined #gluster
05:51 msvbhat joined #gluster
06:00 kotreshhr joined #gluster
06:01 skoduri joined #gluster
06:03 Karan joined #gluster
06:04 skumar_ joined #gluster
06:04 ppai joined #gluster
06:05 Saravanakmr joined #gluster
06:07 rafi joined #gluster
06:21 kramdoss_ joined #gluster
06:28 sona joined #gluster
06:43 bartden joined #gluster
06:52 mlg9000 joined #gluster
06:56 jiffin joined #gluster
06:56 kramdoss_ joined #gluster
07:02 ivan_rossi joined #gluster
07:02 mbukatov joined #gluster
07:05 msvbhat joined #gluster
07:06 ivan_rossi left #gluster
07:20 fsimonce joined #gluster
07:35 armyriad joined #gluster
07:40 armyriad joined #gluster
07:55 major joined #gluster
08:04 shutupsquare joined #gluster
08:05 nh2 joined #gluster
08:06 kramdoss_ joined #gluster
08:10 glisignoli joined #gluster
08:14 panina joined #gluster
08:18 msvbhat joined #gluster
08:18 apandey joined #gluster
08:26 MrAbaddon joined #gluster
08:37 flying joined #gluster
08:40 amarts joined #gluster
08:46 shutupsquare joined #gluster
08:49 atinm joined #gluster
08:52 nh2 JoeJulian: for my "Unable to end." error yesterday, I got further by adding an extra log entry that I think should be there anyway: https://github.com/nh2/glusterfs/commit/59e742c395451b60887c9ea5b39eeb534f803088
08:52 glusterbot Title: glusterd-geo-rep: DEBUG log the command being run · nh2/glusterfs@59e742c · GitHub (at github.com)
08:53 nh2 that shows that `peer_georep-sshkey.py node-generate .` fails with an import error: "ImportError: No module named gluster.cliutils"
08:54 nh2 the presence of the import error is my fault (I'm packaging the gluster python scripts for NixOS and have not yet set up all pythonpaths correctly)
08:54 kramdoss_ joined #gluster
08:55 nh2 but the problem here is that the error message of the thing that calls it claims "Success", and claims that the stderr is empty, when clearly there's a stderr output
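A quick way to verify that the import path is what breaks the helper script; a minimal sketch, where /usr/lib/python2.7/site-packages is only a hypothetical install location:

    # does the interpreter that runs the geo-rep helpers see the module?
    python -c 'import gluster.cliutils'
    # if not, point PYTHONPATH at wherever the gluster package actually lives
    PYTHONPATH=/usr/lib/python2.7/site-packages python -c 'import gluster.cliutils'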
08:59 nh2 JoeJulian: I don't understand this code: https://github.com/nh2/glusterfs/blob/59e742c395451b60887c9ea5b39eeb534f803088/xlators/mgmt/glusterd/src/glusterd-geo-rep.c#L5228
08:59 nh2 Why does it use `errno`? runner_end() does not claim to set errno
08:59 glusterbot Title: glusterfs/glusterd-geo-rep.c at 59e742c395451b60887c9ea5b39eeb534f803088 · nh2/glusterfs · GitHub (at github.com)
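The "Error : Success" wording is what strerror() returns when errno is still 0, i.e. nothing ever set it; a minimal standalone demonstration, assuming gcc is available:

    cat > /tmp/errdemo.c <<'EOF'
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* no call failed, so errno is still 0 and strerror(0) is "Success" */
        fprintf(stderr, "Unable to end. Error : %s\n", strerror(errno));
        return 0;
    }
    EOF
    gcc /tmp/errdemo.c -o /tmp/errdemo && /tmp/errdemo
    # stderr: Unable to end. Error : Success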
09:19 atinm joined #gluster
09:21 skumar_ joined #gluster
09:29 kdhananjay joined #gluster
09:34 taved joined #gluster
09:42 kramdoss_ joined #gluster
09:48 k0nsl joined #gluster
09:48 k0nsl joined #gluster
09:49 k0nsl joined #gluster
09:49 k0nsl joined #gluster
09:58 panina joined #gluster
10:10 jkroon joined #gluster
10:17 ankitr joined #gluster
10:17 nh2 joined #gluster
10:19 kramdoss_ joined #gluster
10:21 derjohn_mob joined #gluster
10:23 rastar joined #gluster
10:30 ankitr joined #gluster
10:30 ahino joined #gluster
10:36 kdhananjay joined #gluster
10:39 prasanth joined #gluster
10:39 kramdoss_ joined #gluster
10:42 Saravanakmr joined #gluster
10:50 sona joined #gluster
10:51 bfoster joined #gluster
10:58 shyam joined #gluster
11:01 nh2 joined #gluster
11:20 poornima joined #gluster
11:22 nh2 joined #gluster
11:26 amarts joined #gluster
11:43 Saravanakmr joined #gluster
11:55 aravindavk joined #gluster
11:56 panina joined #gluster
11:56 nh2 joined #gluster
11:58 rafi REMINDER: Bug triage meeting will start in #gluster-meeting in ~2 minutes
11:59 skoduri joined #gluster
12:06 shyam joined #gluster
12:11 glisignoli joined #gluster
12:12 skumar joined #gluster
12:14 social joined #gluster
12:24 taved_ joined #gluster
12:29 skumar_ joined #gluster
12:35 ccha3 has anyone already tried the NFS Windows client?
12:35 ccha3 is performance poorer than with the NFS Linux client?
12:37 ndevos I've heard of people using it just fine, no idea about the performance though
12:38 ccha3 the use case is a lot of small files in a folder
12:38 ndevos lots of small files are mostly a pain, but as long as you do not need to do directory listings you should be good
12:39 ccha3 just creating 5000 files of 30 kB
12:39 ccha3 around 100 files/s on the Linux NFS client
12:39 ccha3 and 11 files/s on the Windows NFS client
12:40 ndevos you could capture a tcpdump and compare the number of operations that each do for X number of files
12:41 ndevos maybe the windows client has some config/mount options that can help you
12:42 ndevos you can capture the network capture on the nfs-server like: tcpdump -s0 -w /var/tmp/create-files.pcap -i any tcp and port 2049
12:42 ndevos do that for both clients (and use different filenames :)
12:43 baber joined #gluster
12:43 ndevos use tshark to get some stats about it: tshark -r /var/tmp/create-files.pcap -z rpc,srt,100003,3
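Putting ndevos's two commands together, the comparison workflow looks like this; a sketch, the pcap filenames are arbitrary:

    # on the NFS server, capture while the Linux client creates its files
    tcpdump -s0 -w /var/tmp/create-linux.pcap -i any tcp and port 2049
    # repeat while the Windows client creates its files
    tcpdump -s0 -w /var/tmp/create-windows.pcap -i any tcp and port 2049
    # per-procedure NFSv3 counts and response times, side by side
    tshark -r /var/tmp/create-linux.pcap -z rpc,srt,100003,3
    tshark -r /var/tmp/create-windows.pcap -z rpc,srt,100003,3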
12:47 cornfed78 joined #gluster
12:47 cornfed78 Hi! Question about arbiter volumes: Does each volume need an arbiter volume, or is it one arbiter volume per node?
12:48 valkyr3e joined #gluster
12:50 hvisage afaik arbiters are only in the 2+1 case, and then the assumption (in my humble opinion) is that the arbiter doesn’t have data, but is only really a quorum node/brick for the volume
12:51 hvisage volumes with three or more bricks/nodes don’t really need a quorum brick
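To make hvisage's point concrete: the arbiter is declared per volume at create time, one arbiter brick per replica set; a minimal sketch with hypothetical hostnames and brick paths:

    # replica 3 where the third brick of the set holds only metadata (the arbiter)
    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    gluster volume start myvol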
13:01 kotreshhr left #gluster
13:02 rafi left #gluster
13:17 atinm joined #gluster
13:27 skylar joined #gluster
13:27 nbalacha_ joined #gluster
13:36 social joined #gluster
13:46 shyam joined #gluster
13:51 ccha3 ndevos: Ok, I checked the pcap files. On the Windows NFS client there are readdirs, not on the Linux NFS client
13:51 MrAbaddon joined #gluster
13:51 ccha3 for each file there is a readdir
13:52 ndevos ccha3: there should not be a need to do a readdir when a create is done, unless you want to display the updated contents of the directory
13:54 ccha3 no, I'm doing it as a test with smallfile_cli.py
13:54 ccha3 C:\Users\Administrator\smallfile>smallfile_cli.py --operation create --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top \\192.168.47.11\vol1\test1
13:54 ccha3 on windows client
13:54 ccha3 and on linux ./smallfile_cli.py --operation create --threads 1 --file-size 30 --files 5000 --files-per-dir 10000 --top /mnt/nfs/test2
13:57 ccha3 on the windows client the create looks like this: NFS 254 V3 CREATE Call (Reply In 53528), DH: 0x59a0c2c9/_00_1658_ Mode: UNCHECKED
13:57 ccha3 on the linux client: NFS 236 V3 CREATE Call (Reply In 11911), DH: 0xd463f40e/_00_1658_ Mode: EXCLUSIVE
13:58 ndevos you'll somehow need to tell windows to not do readdirs after a create, if that is possible at all
13:59 ccha3 I don't know how to do that
13:59 ccha3 on windows server 2012R2 I just install the nfs client component
13:59 ccha3 and use the nfs as UNC path
14:00 ndevos I have no idea either, I havent touched windows systems since xp
14:01 skumar_ joined #gluster
14:01 guhcampos joined #gluster
14:04 gyadav joined #gluster
14:13 glisignoli joined #gluster
14:22 gyadav_ joined #gluster
14:28 baber joined #gluster
14:36 msvbhat joined #gluster
14:37 armyriad joined #gluster
14:42 plarsen joined #gluster
14:42 farhorizon joined #gluster
14:45 Karan joined #gluster
14:53 buvanesh_kumar joined #gluster
14:57 sona joined #gluster
15:04 jkroon joined #gluster
15:04 baber joined #gluster
15:06 wushudoin joined #gluster
15:06 wushudoin joined #gluster
15:11 taved how do I check the progress of a heal? I'm using gluster volume heal <vol> info, but it really doesn't give much info
15:11 taved is there another command with more detail?
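For the record, the self-heal daemon exposes more detail through the statistics subcommands; a sketch, assuming a volume named myvol and a reasonably recent gluster:

    # per-crawl statistics from the self-heal daemon
    gluster volume heal myvol statistics
    # just the number of entries still pending heal, per brick
    gluster volume heal myvol statistics heal-count
    # entries in split-brain, if any
    gluster volume heal myvol info split-brain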
15:18 atinm joined #gluster
15:22 msvbhat joined #gluster
15:25 melliott joined #gluster
15:33 btspce joined #gluster
15:37 nh2 joined #gluster
15:38 baber joined #gluster
15:42 ksandha_ joined #gluster
15:46 derjohn_mob joined #gluster
15:55 panina joined #gluster
15:57 PatNarciso_ joined #gluster
15:57 PatNarciso_ Good morning.
15:59 _KaszpiR_ question: in gluster, what is the equivalent of sharding?
16:00 ndevos _KaszpiR_: Gluster has sharding. What is your understanding of sharding?
16:00 cloph it is storing the file in smaller chunks, but each chunk is a single file (not spread across different bricks)
16:00 kotreshhr joined #gluster
16:00 ndevos http://blog.gluster.org/2015/12/introducing-shard-translator/
16:01 _KaszpiR_ well, in sharding, each shard can be a standalone instance
16:01 _KaszpiR_ so distributed volume would be closest to this
16:01 _KaszpiR_ while with striped you lose everything if you lose one stripe chunk
16:02 cloph so what is your question then?
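For completeness, sharding as described in the blog post above is a per-volume toggle; a minimal sketch, with the volume name and block size as example values only:

    # store large files as independent shard files, distributed like normal files
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB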
16:02 farhorizon joined #gluster
16:18 tru_tru joined #gluster
16:21 kotreshhr left #gluster
16:26 Gambit15 joined #gluster
16:32 melliott joined #gluster
16:34 Wizek__ joined #gluster
16:35 tallmocha joined #gluster
16:37 JoeJulian As a complete aside, I've never seen anyone loose even a single stripe/shard/chunk/whateveryoucallit. They always stay firmly attached.
16:37 JoeJulian I suppose that's pretty remarkable when you consider how fast some drives spin.
16:46 msvbhat joined #gluster
16:48 major particularly considering that on some of the badly designed circuits the electrons don't make the corners...
16:49 major "To make the merry-go-round go faster; So that everyone needs to hang on tighter; just to keep from being thrown to the wolves."
16:53 pioto joined #gluster
16:53 Karan joined #gluster
16:58 _KaszpiR_ cloph well, I was more like wondering if I get it right
17:03 _KaszpiR_ ndevos ah
17:09 Karan joined #gluster
17:19 foster joined #gluster
17:20 Wizek__ joined #gluster
17:21 rafi joined #gluster
17:45 rafi joined #gluster
17:57 vbellur joined #gluster
18:14 glisignoli joined #gluster
18:18 msvbhat joined #gluster
18:22 major will wonders never cease .. all my unproductive tasks are blocked waiting on the results of meetings .. so suddenly have time to be productive...
18:23 hvisage Yeah, meetings, the most productive way to sabotage a company…. that CIA sabotage manual O_O
18:27 rastar joined #gluster
18:36 _KaszpiR_ ugh meetings
18:36 cjyar joined #gluster
18:38 _KaszpiR_ ugh gluster striped-replicated, I guess I'll get back to distributed-replicated
18:39 major ouch .. http://www.yakimaherald.com/news/local/latest-crews-using-remote-surveying-system-to-scan-for-contamination/article_d0a2632e-34d3-11e7-a376-07fcc8458a3b.html
18:39 cjyar Running GlusterFS + heketi in Kubernetes, I'm trying to take a snapshot with "gluster snapshot snap1 vol" but it fails. The logs aren't very informative; glusterd says "[2017-05-09 18:31:38.139150] E [MSGID: 106030] [glusterd-snapshot.c:4817:glusterd_take_lvm_snapshot] 0-management: taking snapshot of the brick (/var/lib/heketi/mounts/vg_...) of device /dev/mapper/vg_... failed"
18:40 cjyar And the brick log doesn't have much to say either. Barrier on, changelog is not active, barrier off... Nothing to indicate an error.
18:40 _KaszpiR_ cjyar paste lvs
18:41 cjyar LV                                     VG                                  Attr       LSize   Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert   brick_00ea09a5320e902b6b4939b98250dc95 vg_ae047fcaf062d2940684e3a0aee799e1 Vwi-aotz-- 100.00g tp_00ea09a5320e902b6b4939b98250dc95        100.00   brick_1128952f1a9cd82b8f340939cc7fe575 vg_ae047fcaf062d2940684e3a0aee799e1 Vwi-aotz--   1.00g tp_1128952f1a9cd
18:41 glusterbot cjyar: Vwi-aotz's karma is now -1
18:41 glusterbot cjyar: Vwi-aotz's karma is now -2
18:41 cjyar lol... Let's try that again.
18:42 cjyar brick_5e212cb5cb5bf45c6ff58ad5a24d6886 vg_ae047fcaf062d2940684e3a0aee799e1 Vwi-aotz--   1.00g tp_5e212cb5cb5bf45c6ff58ad5a24d6886        0.66
18:42 glusterbot cjyar: Vwi-aotz's karma is now -3
18:42 cjyar tp_5e212cb5cb5bf45c6ff58ad5a24d6886    vg_ae047fcaf062d2940684e3a0aee799e1 twi-aotz--   1.00g                                            0.66   0.49
18:42 glusterbot cjyar: twi-aotz's karma is now -1
18:44 ndevos ~paste | cjyar
18:44 glusterbot cjyar: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
18:46 _KaszpiR_ AERGGGGHHHH
18:50 cjyar http://termbin.com/bwug
18:51 _KaszpiR_ one of your bricks is 100% full
18:51 cjyar It's not the one I'm snapshotting, if that's relevant.
18:56 panina joined #gluster
18:56 _KaszpiR_ what about existing snapshots?
18:56 cjyar There are none.
18:58 _KaszpiR_ what's in gluster snapshot config?
18:59 cjyar http://termbin.com/4t1k
19:00 cjyar glusterd log excerpt: http://termbin.com/r8s3 brick log excerpt: http://termbin.com/l0ya
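If the 100%-full thin volume from the lvs paste is what is tripping up glusterd_take_lvm_snapshot, one way to confirm and address it; a sketch, using the VG and thin pool names from the paste, and assuming the VG still has free extents:

    # show thin pool usage for the heketi-managed VG
    lvs -o lv_name,pool_lv,data_percent,metadata_percent vg_ae047fcaf062d2940684e3a0aee799e1
    # grow the exhausted thin pool so snapshots have room for copy-on-write data
    lvextend -L +10G vg_ae047fcaf062d2940684e3a0aee799e1/tp_00ea09a5320e902b6b4939b98250dc95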
19:01 baber joined #gluster
19:04 skylar joined #gluster
19:11 panina joined #gluster
19:14 tallmocha joined #gluster
19:25 vbellur joined #gluster
19:34 derjohn_mob joined #gluster
19:58 msvbhat joined #gluster
20:01 panina joined #gluster
20:04 tallmocha joined #gluster
20:04 gyadav_ joined #gluster
20:07 jkroon joined #gluster
20:09 shyam joined #gluster
20:17 vbellur joined #gluster
20:18 PatNarciso so, nic teaming... is it really better than bonding?
20:20 PatNarciso and while I question everything I thought I knew -- is md still the ideal way to build a raid6; or is there some new lvm5000(xp plus alpha) that's better?
20:34 baber joined #gluster
20:45 guhcampos joined #gluster
20:47 nh2 joined #gluster
21:05 glisignoli joined #gluster
21:08 nh2 joined #gluster
21:15 baber joined #gluster
21:25 MrAbaddon joined #gluster
22:39 nh2 where can I find the list of glusterfs run-time dependencies? Like the python prettytable package?
22:40 JoeJulian The spec file?
22:41 nh2 JoeJulian: is that glusterfs.spec / glusterfs.spec.in?
22:41 JoeJulian yes
22:42 nh2 JoeJulian: what is that, is it an RPM-specific thing?
22:42 JoeJulian You could also look at the PKGBUILD. Probably easier to read: https://git.archlinux.org/svntogit/community.git/tree/trunk/PKGBUILD?h=packages/glusterfs
22:43 glusterbot Title: svntogit/community.git - Git clone of the 'community' repository (at git.archlinux.org)
22:43 JoeJulian And yes. The spec file is for building RPMs.
22:45 _KaszpiR_ holy crap, idk why but use.cifs is enabled by default and it adds an entry to samba to mount gluster, yet it fails because it's missing one option ;D
22:45 nh2 JoeJulian: I'm a bit wary of using downstream packager info for this, as they tend to just "fix things if somebody finds an error", especially if no easy upstream ground truth is available. For example, that pkgbuild doesn't mention prettytable or flask
22:46 nh2 JoeJulian: especially for the python deps this leads to run-time errors like the amazing "Unable to end. Error : Success" from yesterday
22:47 nh2 JoeJulian: glusterfs.spec mentions prettytable (in contrast to the pkgbuild), but it doesn't mention flask. Which seems like a bug in the spec file, as eventsdash.py imports flask
22:54 nh2 nor does it list werkzeug, which is also needed; without it eventsdash.py fails with `ImportError: No module named werkzeug.exceptions`
23:05 JoeJulian nh2: nope, not a bug. There's no need to list requirements that are required by requirements. That's the beauty of packaged distros.
23:07 nh2 JoeJulian: whose requirement's requirement is flask? gluster directly imports it
23:08 nh2 or does eventsdash not count as a part of gluster? I have no idea what it is
23:08 JoeJulian No idea. I misread your meaning when I read imports.
23:09 nh2 JoeJulian: I think for werkzeug you are right and that's a dep of flask, so that would be fine, but flask itself seems to be a real dep
23:12 JoeJulian ok, eventsdash seems to be in the glusterfs repo
23:15 JoeJulian werkzeug is only a logger name.
23:22 JoeJulian flask, not sure. I'd have to look at a dependency tree.
23:56 vbellur joined #gluster
23:57 nh2 JoeJulian: right, I got confused by the fact that it is *also* a logging name; but flask itself still imports it
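An upstream-ish ground truth can be scraped from the glusterfs tree itself; a rough sketch, run from a source checkout, which misses conditional imports and will also list stdlib modules:

    # list every top-level module imported by the bundled Python scripts
    grep -rhE '^[[:space:]]*(import|from)[[:space:]]' --include='*.py' . \
      | awk '{print $2}' | cut -d. -f1 | sort -u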
