
IRC log for #gluster, 2016-03-04


All times shown according to UTC.

Time Nick Message
00:00 kenansulayman joined #gluster
00:11 dlambrig joined #gluster
00:16 abyss^ joined #gluster
00:18 ovaistariq joined #gluster
00:33 anil joined #gluster
00:36 ovaistariq joined #gluster
00:45 dgandhi joined #gluster
00:47 dgandhi joined #gluster
00:48 dgandhi joined #gluster
00:49 nangthang joined #gluster
00:50 dgandhi joined #gluster
00:50 dgandhi joined #gluster
00:53 amye joined #gluster
00:54 ovaistariq joined #gluster
00:56 nangthang joined #gluster
00:57 ahino joined #gluster
00:58 nangthang joined #gluster
01:02 johnmilton joined #gluster
01:17 ovaistariq joined #gluster
01:21 itisravi joined #gluster
01:26 EinstCrazy joined #gluster
01:41 penguinRaider joined #gluster
01:59 muneerse2 joined #gluster
02:15 DV joined #gluster
02:17 baojg joined #gluster
02:31 Lee1092 joined #gluster
02:37 d0nn1e joined #gluster
02:39 haomaiwa_ joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:54 syadnom if I have replica x set on a volume, will native gluster clients read from multiple bricks to improve performance?
02:56 haomaiwa_ joined #gluster
03:04 nishanth joined #gluster
03:12 skoduri joined #gluster
03:12 haomaiwa_ joined #gluster
03:15 JoeJulian No.
03:16 JoeJulian syadnom: Each fd will read from one replica. If you have millions of reads for a single file, adding replicas may help (like if you're netflix). Otherwise, spreading the load with more distribute subvolumes is more often the better choice.
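JoeJulian's point (each fd reads from one replica, so read parallelism comes from distribute subvolumes rather than replica count) can be sketched as follows. The server and brick names are hypothetical; the arithmetic just shows how brick count and replica count determine the number of distribute subvolumes.

```shell
# Hypothetical layout: 6 bricks across 3 servers, replica 2.
# Each file lives on exactly one replica pair, and an open fd reads from
# one brick of that pair; spreading load means more distribute subvolumes.
#
#   gluster volume create vmvol replica 2 \
#     server1:/bricks/b1 server2:/bricks/b1 \
#     server2:/bricks/b2 server3:/bricks/b2 \
#     server3:/bricks/b3 server1:/bricks/b3
BRICKS=6
REPLICA=2
echo "distribute subvolumes: $((BRICKS / REPLICA))"
```

With 6 bricks at replica 2 this prints `distribute subvolumes: 3`: three places for files to hash to, hence three bricks that can serve independent reads concurrently.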
03:18 syadnom JoeJulian, I'm thinking of a VM store.  The files are often fairly large and I'd have them replicated across 2-3 bricks.
03:19 JoeJulian @lucky the dos and donts of replication
03:19 glusterbot JoeJulian: https://joejulian.name/blog/glusterfs-replication-dos-and-donts/
03:19 JoeJulian syadnom: See that article.
03:21 syadnom reading...
03:23 DV joined #gluster
03:28 chirino joined #gluster
03:30 syadnom well, I guess that answers that yes, a native gluster client reads from anything with replicas
03:31 syadnom doesn't really clarify if this is good for a VM host..
03:37 aravindavk joined #gluster
03:38 aravindavk joined #gluster
03:48 atinm joined #gluster
03:51 RameshN joined #gluster
03:53 kanagaraj joined #gluster
04:04 itisravi joined #gluster
04:07 shubhendu joined #gluster
04:13 kdhananjay joined #gluster
04:14 sakshi joined #gluster
04:26 pur joined #gluster
04:26 jiffin joined #gluster
04:28 haomaiwa_ joined #gluster
04:28 hagarth joined #gluster
04:30 rjoseph joined #gluster
04:35 glisignoli joined #gluster
04:35 DV joined #gluster
04:37 misc joined #gluster
04:37 harish_ joined #gluster
04:44 msvbhat joined #gluster
04:45 ramky joined #gluster
04:47 cpetersen__ joined #gluster
04:51 ppai joined #gluster
04:54 honzik666 joined #gluster
04:55 Marbug joined #gluster
04:55 cpetersen_ joined #gluster
04:57 hchiramm joined #gluster
04:58 cpetersen_ joined #gluster
05:00 nehar joined #gluster
05:04 pppp joined #gluster
05:04 purpleidea joined #gluster
05:06 gem joined #gluster
05:08 karthikfff joined #gluster
05:08 hgowtham joined #gluster
05:18 Apeksha joined #gluster
05:19 DV joined #gluster
05:20 bitchecker joined #gluster
05:20 poornimag joined #gluster
05:23 ndarshan joined #gluster
05:26 kdhananjay joined #gluster
05:27 lalatenduM joined #gluster
05:29 anil joined #gluster
05:32 sac joined #gluster
05:32 ggarg joined #gluster
05:36 kshlm joined #gluster
05:43 nehar joined #gluster
05:44 poornimag joined #gluster
05:49 pppp joined #gluster
05:50 kotreshhr joined #gluster
05:50 nishanth joined #gluster
05:52 gem joined #gluster
06:01 gowtham joined #gluster
06:05 rafi joined #gluster
06:05 spalai joined #gluster
06:06 purpleidea joined #gluster
06:06 purpleidea joined #gluster
06:07 rafi joined #gluster
06:07 atalur joined #gluster
06:09 vmallika joined #gluster
06:12 Manikandan joined #gluster
06:15 DV joined #gluster
06:16 purpleidea joined #gluster
06:16 purpleidea joined #gluster
06:16 kovshenin joined #gluster
06:22 mhulsman joined #gluster
06:23 Saravanakmr joined #gluster
06:25 mhulsman1 joined #gluster
06:25 atalur joined #gluster
06:31 kshlm joined #gluster
06:35 vmallika joined #gluster
06:36 nehar joined #gluster
06:37 purpleidea joined #gluster
06:43 ashiq joined #gluster
06:50 kdhananjay joined #gluster
06:51 rafi joined #gluster
06:54 kshlm joined #gluster
06:57 jhyland joined #gluster
07:02 jobewan joined #gluster
07:03 unlaudable joined #gluster
07:11 itisravi joined #gluster
07:12 itisravi joined #gluster
07:12 itisravi joined #gluster
07:13 SOLDIERz joined #gluster
07:14 jtux joined #gluster
07:16 hchiramm joined #gluster
07:18 spalai joined #gluster
07:38 edong23 joined #gluster
07:38 [Enrico] joined #gluster
07:55 F2Knight joined #gluster
07:55 F2Knight_ joined #gluster
07:57 F2Knight joined #gluster
08:02 nangthang joined #gluster
08:15 taavida1 how to resolve when trusted.gfid of a file is different between members  (replicated volume)?
08:17 kdhananjay taavida1: is it a file or a directory?
08:18 robb_nl joined #gluster
08:20 ivan_rossi joined #gluster
08:25 karnan joined #gluster
08:38 mhulsman joined #gluster
08:39 harish_ joined #gluster
08:41 taavida1 kdhananjay: it's a file
08:43 ctria joined #gluster
08:45 kdhananjay taavida1: could you share output of `gluster volume info`?
08:54 Akee joined #gluster
08:57 karthikfff joined #gluster
08:59 rafi joined #gluster
09:00 taavida1 kdhananjay: http://fpaste.org/333644/45708202/
09:00 glusterbot Title: #333644 Fedora Project Pastebin (at fpaste.org)
09:03 jri joined #gluster
09:13 Wizek joined #gluster
09:13 Wizek_ joined #gluster
09:17 karthikfff joined #gluster
09:24 baojg joined #gluster
09:24 deniszh joined #gluster
09:28 Wizek__ joined #gluster
09:29 haomaiwa_ joined #gluster
09:30 jiffin1 joined #gluster
09:31 karthik__ joined #gluster
09:31 karthik__ left #gluster
09:39 karthik__ joined #gluster
09:39 rafi1 joined #gluster
09:44 kdhananjay taavida1: ok so you need to choose which copy you want to preserve and which copy you want to discard, first off.
09:46 karthik__ joined #gluster
09:46 kdhananjay taavida1: you can go to the individual bricks, examine the contents of the respective copies of the file
09:46 kdhananjay taavida1: and make a choice.
09:47 kdhananjay taavida1: i will call this the good copy from now on.
09:47 kdhananjay taavida1: and the other the bad copy
09:47 kdhananjay taavida1: once bad and good copy are chosen, go to the brick where the bad copy is, get its gfid.
09:47 kdhananjay taavida1: specifically check if it has any hardlinks
09:48 kdhananjay taavida1: in the bad copy containing brick, cd into .glusterfs/<first-two-chars-of-bad-copies-gfid>/<next-two-chars-of-bad-copies-gfid>
09:49 kdhananjay taavida1: for example if the gfid of the bad copy is this: 3c4ccd01-446c-4a67-ad33-dfccd72b2454 ...
09:49 kdhananjay taavida1: ... cd into <brick-path-on-bad-copy>/.glusterfs/3c/4c
09:49 kenansulayman joined #gluster
09:50 arcolife joined #gluster
09:50 kdhananjay taavida1: and there delete the gfid-named file.
09:50 kdhananjay taavida1: in the example i gave this would correspond to the file named 3c4ccd01-446c-4a67-ad33-dfccd72b2454
09:50 haomaiwa_ joined #gluster
09:51 kdhananjay taavida1: once done, delete any more hardlinks your application may have created on bad copy
09:51 kdhananjay taavida1: then finally delete the bad copy from its actual parent directory
09:52 kdhananjay taavida1: then go to a FUSE mount point of your volume, and do ls <path-to-the-file-with-gfid-mismatch>
09:52 kdhananjay taavida1: check if it reappears on the bad brick now with the correct gfid.
09:52 kdhananjay taavida1: that's about it.
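The procedure kdhananjay describes above can be sketched as a shell fragment. The brick path is a hypothetical placeholder; the gfid is the one from the example. The only non-obvious part is locating the gfid-named hardlink under `.glusterfs/<aa>/<bb>/<gfid>`, where `<aa>` and `<bb>` are the first two and next two hex characters of the gfid. The destructive steps are left as comments so nothing is removed by accident.

```shell
# Assumptions: brick path is hypothetical; gfid is from the example above.
BRICK=/bricks/brick1                         # brick holding the bad copy
GFID=3c4ccd01-446c-4a67-ad33-dfccd72b2454    # gfid of the bad copy

# Build the path of the gfid-named hardlink inside .glusterfs
AA=$(echo "$GFID" | cut -c1-2)               # first two chars  -> "3c"
BB=$(echo "$GFID" | cut -c3-4)               # next two chars   -> "4c"
GFID_LINK="$BRICK/.glusterfs/$AA/$BB/$GFID"
echo "$GFID_LINK"

# On the real brick you would then (destructive -- double-check first):
#   rm "$GFID_LINK"                  # delete the gfid-named hardlink
#   rm <any app-created hardlinks>   # remove remaining hardlinks of the bad copy
#   rm "$BRICK/<path-to-bad-copy>"   # remove the bad copy from its parent dir
# and finally stat the file from a FUSE mount of the volume so it is
# re-created on the bad brick with the correct gfid:
#   ls <fuse-mount>/<path-to-the-file-with-gfid-mismatch>
```

For the example gfid this prints `/bricks/brick1/.glusterfs/3c/4c/3c4ccd01-446c-4a67-ad33-dfccd72b2454`, matching the `.glusterfs/3c/4c` path given in the conversation.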
09:52 kenansul- joined #gluster
09:53 haomaiwang joined #gluster
09:56 vmallika joined #gluster
10:10 kenansul- joined #gluster
10:13 kenansul| joined #gluster
10:19 robb_nl joined #gluster
10:25 cholcombe joined #gluster
10:35 social joined #gluster
10:35 nishanth joined #gluster
10:37 haomaiwang joined #gluster
10:39 ira joined #gluster
10:57 Saravanakmr joined #gluster
11:00 mattjnz joined #gluster
11:02 mattjnz Hey all - I've got a gluster node that ran out of disk space (we noticed trying to do a config change and got a commit failed message for that node) - on restart this is now seemingly corrupted as it is exiting quickly with a lot of errors and seems to think it's a fresh install. See http://pastebin.com/gqXkyJ5X for logs
11:02 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:02 Gnomethrower joined #gluster
11:03 mattjnz http://fpaste.org/333716/45708942/ as requested
11:03 glusterbot Title: #333716 Fedora Project Pastebin (at fpaste.org)
11:04 Gnomethrower hey everyone
11:04 mattjnz Any ideas much appreciated as it's a very urgent issue
11:07 chirino_m joined #gluster
11:08 v12aml joined #gluster
11:15 mattjnz Managed to get first issue solved - glusterd.info file was truncated. Now still getting heaps of brick errors and management volume not being recognized
11:24 haomaiwang joined #gluster
11:28 shyam joined #gluster
11:30 johnmilton joined #gluster
11:30 mattjnz Updated log: http://fpaste.org/333732/57090802/
11:30 glusterbot Title: #333732 Fedora Project Pastebin (at fpaste.org)
11:47 anil joined #gluster
12:01 jiffin1 joined #gluster
12:10 haomaiwa_ joined #gluster
12:13 mattjnz Just started a completely new instance, following brick recovery procedure, and same error after syncing volumes - gluster daemon will not restart, [glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
12:14 mattjnz Plus a lot of  [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0 et al
12:18 Javezim joined #gluster
12:18 Javezim Hey All
12:19 Javezim Can someone please explain the benefits of an arbiter replica, and the scenarios where it would be a good idea for Gluster?
12:20 post-factum arbiter is good for HA and avoiding split-brains
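To illustrate post-factum's answer: an arbiter volume keeps full data on all replicas except the arbiter brick, which stores only metadata and acts as a tie-breaker against split-brain. The server and brick names below are hypothetical.

```shell
# Hypothetical replica-3 arbiter-1 volume: two full data copies plus one
# metadata-only arbiter brick that breaks split-brain ties at roughly the
# storage cost of a replica-2 volume.
#
#   gluster volume create demo replica 3 arbiter 1 \
#     s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/arb1
REPLICA=3
ARBITER=1
echo "full data copies: $((REPLICA - ARBITER))"
```

This prints `full data copies: 2`: clients still need quorum among three bricks, so a single writer cannot create divergent copies, but only two bricks pay the full data cost.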
12:28 Javezim And Ganesha_NFS is a way to share out the gluster volumes
12:28 Javezim Does anyone have any good setup documents on Ganesha setup?
12:31 gem joined #gluster
12:33 baojg joined #gluster
12:33 post-factum config examples included into ganesha docs are pretty self-explanatory. feel free, however, to ask if you need to clarify some of them
12:46 jiffin Javezim: http://gluster.readthedocs.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/
12:46 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.org)
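For reference, a minimal NFS-Ganesha export block for a Gluster volume looks roughly like the sketch below. The volume name, export id, and paths are hypothetical placeholders; consult the linked docs for the options your deployment actually needs.

```
# Minimal sketch of an NFS-Ganesha export over the GLUSTER FSAL.
# "testvol", the export id, and the paths are assumptions, not from the log.
EXPORT {
    Export_Id = 2;
    Path = "/testvol";
    Pseudo = "/testvol";
    Access_Type = RW;
    FSAL {
        Name = "GLUSTER";
        Hostname = "localhost";   # a server in the trusted pool
        Volume = "testvol";
    }
}
```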
12:57 kotreshhr left #gluster
13:01 nangthang joined #gluster
13:02 kenhui joined #gluster
13:02 the-me joined #gluster
13:06 Apeksha joined #gluster
13:09 haomaiwa_ joined #gluster
13:23 karnan joined #gluster
13:24 karnan joined #gluster
13:25 chirino joined #gluster
13:30 Javezim joined #gluster
13:30 Javezim can someone help me get samba-vfs-glusterfs working please
13:30 Javezim Can't understand why it isn't
13:30 haomaiwang joined #gluster
13:30 Javezim have installed Gluster 3.7, set it all up
13:31 Javezim https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7
13:31 glusterbot Title: samba-vfs-glusterfs-3.7 : André Bauer (at launchpad.net)
13:31 Javezim Followed the above guide
13:31 Javezim But when I try to access the share I set up I get these errors
13:31 Javezim http://fpaste.org/333795/57098313/
13:31 glusterbot Title: #333795 Fedora Project Pastebin (at fpaste.org)
13:32 plarsen joined #gluster
13:33 Javezim Am using Ubuntu 14.04
13:33 post-factum Javezim: reportedly, you have got no glusterfs module for samba installed
13:34 Javezim Hmm how does one get this on Ubuntu 14.04.3
13:34 Javezim I mean I tried this guide -> https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7
13:34 glusterbot Title: samba-vfs-glusterfs-3.7 : André Bauer (at launchpad.net)
13:34 Javezim apparently it didn't do it
13:35 pur joined #gluster
13:37 post-factum have you installed glusterfs-enabled samba package?
13:38 hackman joined #gluster
13:39 Javezim Hmm no where is this from?
13:41 post-factum hmm, you have provided link to ppa above
13:41 post-factum use that ppa :)
13:42 Javezim post-factum Nothing called Glusterfs-enabled in there
13:43 post-factum PPA description
13:43 post-factum Samba VFS modules with GlusterFS 3.7 VFS module.
13:44 Javezim post-factum, Yeah but still coming back with the error
13:44 Javezim Installed Glusterfs-common
13:44 Javezim But no go
13:44 post-factum you have to install samba from that ppa
13:51 Javezim Done and still failing
13:53 post-factum check if mentioned in log *.so file really exists in your system
13:55 unclemarc joined #gluster
13:55 Javezim Hmm it does not
13:56 post-factum so it seems you have installed wrong package or something
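post-factum's "check if the *.so really exists" step can be scripted. The module directories below are typical Debian/Ubuntu and Fedora/EL locations (assumptions; on a real system `smbd -b | grep MODULESDIR` reports the actual one).

```shell
# Check whether Samba's glusterfs VFS module is installed in any of the
# usual module directories (paths are common defaults, not guaranteed).
MODULE=glusterfs.so
for d in /usr/lib/x86_64-linux-gnu/samba/vfs /usr/lib/samba/vfs /usr/lib64/samba/vfs; do
    if [ -e "$d/$MODULE" ]; then
        echo "found: $d/$MODULE"
    fi
done
echo "checked for $MODULE"
```

If nothing is found, the installed samba package was built without Gluster support, which matches the error Javezim pasted.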
13:57 dlambrig joined #gluster
13:59 arcolife joined #gluster
14:00 arcolife joined #gluster
14:04 Gnomethrower joined #gluster
14:04 shaunm joined #gluster
14:06 Javezim Omg
14:06 Javezim So I had to find this gluster.so on another gluster box
14:06 Javezim copy it to my one
14:06 Javezim and voilà
14:06 post-factum nope
14:06 Javezim it works
14:06 Javezim How the fuck
14:09 bowhunter joined #gluster
14:09 Javezim That just racked my brain
14:13 haomaiwa_ joined #gluster
14:18 shyam joined #gluster
14:24 shyam1 joined #gluster
14:27 rwheeler joined #gluster
14:32 ahino joined #gluster
14:32 vmallika joined #gluster
14:40 farhorizon joined #gluster
14:44 skylar joined #gluster
14:48 Gnomethrower joined #gluster
14:51 jhyland joined #gluster
14:52 jhyland joined #gluster
14:53 theron joined #gluster
14:59 Javezim FYI I just created a virtual environment of Gluster and re-did the setup and again the gluster.so was missing
14:59 Javezim Can't seem to find a report of the bug
14:59 Javezim are Gluster aware of this?
15:00 post-factum it is rather the question to samba package maintainer than to gluster devs
15:00 Javezim this time i just did it via the normal ppa:gluster/glusterfs-3.7
15:00 Javezim which afaik is maintained by gluster
15:01 haomaiwa_ joined #gluster
15:05 rwheeler_ joined #gluster
15:05 spalai joined #gluster
15:05 hamiller joined #gluster
15:08 spalai left #gluster
15:11 theron_ joined #gluster
15:21 theron joined #gluster
15:27 sebamontini joined #gluster
15:27 [Enrico] joined #gluster
15:28 coredump joined #gluster
15:33 coredump joined #gluster
15:33 sebamontini joined #gluster
15:38 JoeJulian @later tell Javezim The glusterfs.so should be in your downstream distro's samba-vfs-modules package. https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1281493
15:38 glusterbot JoeJulian: The operation succeeded.
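Once the distro's samba-vfs-modules package (which ships glusterfs.so) is installed, a share over a Gluster volume needs only a small smb.conf stanza. The share name and volume name below are hypothetical; the `glusterfs:*` options come from the vfs_glusterfs module.

```
# Sketch of an smb.conf share using the glusterfs VFS module.
# "[vms]" and "testvol" are assumptions, not names from the log.
[vms]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = testvol
    glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
    read only = no
    kernel share modes = no
```

With vfs_glusterfs, smbd talks to the volume through libgfapi directly, so the volume does not need to be FUSE-mounted on the Samba server.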
15:48 kenhui joined #gluster
15:53 jhyland joined #gluster
15:56 d0nn1e joined #gluster
15:57 shyam1 joined #gluster
16:00 nishanth joined #gluster
16:01 haomaiwang joined #gluster
16:01 skylar joined #gluster
16:06 voobscout joined #gluster
16:08 jwd joined #gluster
16:10 farhorizon joined #gluster
16:17 amye joined #gluster
16:18 jiffin joined #gluster
16:26 squizzi joined #gluster
16:29 jiffin joined #gluster
16:33 bennyturns joined #gluster
16:33 shaunm joined #gluster
16:35 jiffin joined #gluster
16:42 rafi joined #gluster
16:43 ayma joined #gluster
16:48 jiffin joined #gluster
16:51 voobscout joined #gluster
16:54 raghu joined #gluster
16:54 shubhendu joined #gluster
17:00 ayma1 joined #gluster
17:00 jobewan joined #gluster
17:02 atalur joined #gluster
17:03 ayma joined #gluster
17:08 Chinorro joined #gluster
17:10 ramky joined #gluster
17:11 theron joined #gluster
17:11 muneerse joined #gluster
17:18 sagarhani joined #gluster
17:22 dlambrig joined #gluster
17:23 shyam joined #gluster
17:25 ggarg joined #gluster
17:28 jiffin joined #gluster
17:32 sebamontini joined #gluster
17:35 ivan_rossi left #gluster
17:39 calavera joined #gluster
17:46 mhulsman joined #gluster
17:51 coredump joined #gluster
17:53 dlambrig joined #gluster
17:58 squizzi joined #gluster
18:14 kenhui joined #gluster
18:31 portante joined #gluster
18:44 XpineX joined #gluster
18:45 ovaistariq joined #gluster
18:50 valkyr1e joined #gluster
18:50 ovaistar_ joined #gluster
18:50 rwheeler joined #gluster
19:00 farhorizon joined #gluster
19:07 farhorizon joined #gluster
19:10 hamiller joined #gluster
19:14 atalur joined #gluster
19:17 ovaistariq joined #gluster
19:24 B21956 joined #gluster
19:35 coredump joined #gluster
19:42 sebamontini joined #gluster
19:42 ovaistariq joined #gluster
20:00 ovaistariq joined #gluster
20:00 farhorizon joined #gluster
20:01 gbox joined #gluster
20:01 rafi1 joined #gluster
20:04 sebamontini joined #gluster
20:04 farhorizon joined #gluster
20:32 farhorizon joined #gluster
20:32 squizzi joined #gluster
21:02 kenhui joined #gluster
21:02 kenhui1 joined #gluster
21:03 skylar joined #gluster
21:10 johnmilton joined #gluster
21:11 johnmilton joined #gluster
21:13 johnmilton joined #gluster
21:17 rafi joined #gluster
21:20 edong23 joined #gluster
21:30 sebamontini joined #gluster
21:36 chirino joined #gluster
21:43 shaunm joined #gluster
21:43 mpietersen joined #gluster
21:44 amye joined #gluster
21:44 atrius_ joined #gluster
21:45 robb_nl joined #gluster
21:45 farhorizon joined #gluster
21:46 farhorizon joined #gluster
21:52 ovaistariq joined #gluster
21:55 kovshenin joined #gluster
22:26 ovaistariq joined #gluster
22:31 ovaistariq joined #gluster
23:01 farhorizon joined #gluster
23:09 shyam joined #gluster
23:14 farhorizon joined #gluster
23:25 squizzi joined #gluster
23:26 jhyland joined #gluster
23:35 jobewan joined #gluster
23:41 CyrilPeponnet @JoeJulian have experienced some Initramfs unpacking failed: junk in compressed archive at boot up using libgfapi ?
23:41 JoeJulian Nope
23:42 CyrilPeponnet Or stuff like FAT-fs (sda): error, fat_get_cluster: invalid cluster chain (i_pos 262435) on boot :/
23:43 CyrilPeponnet is there some debug regarding libgfapi somewhere?
23:45 JoeJulian Well, it's diagnostics.client-log-level output, but where it's put might depend on the application. Otherwise, look in /var/log/glusterfs.
23:45 ayma seems like gate-manila-tempest-dsvm-neutron-multibackend is failing?  looking at the logs it seems to say
23:45 ayma 2016-03-04 21:30:48.532 | 2016-03-04 21:30:48.488 | ERROR: InvocationError: '/bin/bash tools/pretty_tox.sh manila_tempest_tests.tests.api --concurrency=20' 2016-03-04 21:30:48.532 | 2016-03-04 21:30:48.491 | ___________________________________ summary ____________________________________ 2016-03-04 21:30:48.532 | 2016-03-04 21:30:48.494 | ERROR:   all-plugin: commands failed
23:46 ayma is this a known issue?
23:48 JoeJulian I didn't do it!
23:49 JoeJulian Any code contribution I've made to openstack has only been to make error messages actually tell you something actionable.
23:50 ayma okay thanks
23:51 JoeJulian Probably better asked in #openstack-neutron
23:53 CyrilPeponnet @JoeJulian what does [client-rpc-fops.c:1298:client3_3_removexattr_cbk] 0-ansos_380-client-0: remote operation failed [No data available] mean?
23:54 CyrilPeponnet (if you have any idea)
23:54 JoeJulian That means that on your first brick for volume ansos_380, an attempt to remove an extended attribute from something failed. More info may be available in the brick log of that first brick.
23:55 CyrilPeponnet something like SETATTR /mvqa12/images/flexibed/vm_6/disk0.snapshot.qcow2 (95d998dd-0d32-4dc3-b105-df188e5e7856) ==> (Operation not permitted) ?
23:55 JoeJulian Most of the time when I've seen that, it's when a file has been deleted.
23:56 JoeJulian And yes, that looks like a good candidate for correlation.
23:56 plarsen joined #gluster
