
IRC log for #gluster, 2014-11-09


All times shown according to UTC.

Time Nick Message
00:01 sman joined #gluster
00:02 portante joined #gluster
00:03 atrius joined #gluster
00:03 johnnytran joined #gluster
00:05 buhman joined #gluster
00:07 mariusp joined #gluster
00:20 badone joined #gluster
00:32 russoisraeli JoeJulian - I believe it's your blog I read regarding distributed vs. striped volumes. Could you please tell me if you know?
00:34 bennyturns joined #gluster
00:36 mariusp joined #gluster
00:42 JoeJulian russoisraeli: yep, that's my blog.
00:43 JoeJulian russoisraeli: no, it is not possible to convert to or from stripe.
00:45 JoeJulian russoisraeli: And VM images are generally not that random. Stripe hasn't been shown to provide any performance benefit for VM images, so unless you're going to have images that exceed your brick sizes (and I would strongly recommend against that), stripe is not likely to be beneficial.
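For context, a minimal sketch of the distributed-replicated layout being recommended here instead of stripe; the volume name "vmstore" and the server/brick paths are hypothetical:

    # 2x2 distributed-replicated volume (no stripe); hypothetical hosts and paths
    gluster volume create vmstore replica 2 \
        server1:/bricks/vmstore server2:/bricks/vmstore \
        server3:/bricks/vmstore server4:/bricks/vmstore
    gluster volume start vmstore
    # mount on a client via the FUSE client
    mount -t glusterfs server1:/vmstore /mnt/vmstore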
01:09 mariusp joined #gluster
01:27 daMaestro joined #gluster
01:28 mariusp joined #gluster
01:48 mariusp joined #gluster
02:08 mariusp joined #gluster
02:12 plarsen joined #gluster
02:22 meghanam joined #gluster
02:26 meghanam joined #gluster
02:26 meghanam_ joined #gluster
02:27 mariusp joined #gluster
02:36 sjohnsen joined #gluster
02:40 hflai joined #gluster
02:48 mariusp joined #gluster
03:04 bala joined #gluster
03:18 mariusp joined #gluster
03:27 hightower4 joined #gluster
03:38 mariusp joined #gluster
03:44 toecutter joined #gluster
03:55 badone joined #gluster
03:59 mariusp joined #gluster
04:28 mariusp joined #gluster
04:56 elico joined #gluster
04:59 mariusp joined #gluster
05:03 haomaiwa_ joined #gluster
05:19 mariusp joined #gluster
05:34 MacWinner joined #gluster
05:38 mariusp joined #gluster
05:58 mariusp joined #gluster
06:03 meghanam joined #gluster
06:03 meghanam_ joined #gluster
06:16 glusterbot New news from newglusterbugs: [Bug 1161903] Different client can not "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime <https://bugzilla.redhat.com/show_bug.cgi?id=1161903>
06:18 mariusp joined #gluster
06:37 mariusp joined #gluster
06:52 anoopcs joined #gluster
06:59 mariusp joined #gluster
07:15 mator joined #gluster
07:28 mariusp joined #gluster
07:28 meghanam joined #gluster
07:28 meghanam_ joined #gluster
07:47 glusterbot New news from newglusterbugs: [Bug 1161903] Different client can not execute "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime <https://bugzilla.redhat.com/show_bug.cgi?id=1161903>
07:47 dataio JoeJulian: The brick is separate from the OS. It's an mdadm RAID6 set, so if I do a reinstall I will only touch the OS. But I meant in terms of losing metadata or something..
07:48 mariusp joined #gluster
08:31 mariusp joined #gluster
08:36 SOLDIERz joined #gluster
08:41 andreask joined #gluster
09:13 haomaiwa_ joined #gluster
09:25 haomai___ joined #gluster
09:45 ktogias joined #gluster
09:50 ktogias Hi all. After rebooting some bricks of a live GlusterFS replicated-distributed volume with 6 bricks (2x3), gluster initiated a rebalance and started to lock the bricks. After leaving it all night and then restarting glusterfsd on all nodes I was able to get status and heal info. The heal info shows split-brain on the root (/) and other directories on two nodes.
09:52 ktogias I am looking for documentation on resolving the split-brain, and have found some, but now 1 brick (one of the two with split-brain) is again locked, and looking at top, glusterfsd seems to be doing some work...
09:53 ktogias What should I do? Let it finish its job? Stop it in order to try manually resolving the split-brain?
09:54 ktogias A side effect of this is that I fail to mount the volume from other machines. I get an I/O error on the mount point.
09:55 ktogias Any help, or suggestions? I am quite new to gluster, so anything from your experience may be valuable.
09:55 ktogias Thanks
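A minimal sketch of how a split-brain like this is usually inspected with the stock gluster CLI; the volume name "data" and the brick path are assumptions, not taken from the channel:

    # list the entries the self-heal daemon considers split-brained
    gluster volume heal data info split-brain
    # on each brick, compare the AFR changelog xattrs of a reported path
    getfattr -d -m trusted.afr -e hex /bricks/data/some/dir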
10:11 badone joined #gluster
10:11 Gorian joined #gluster
10:14 ktogias Ok... I managed to locate the log file gluster writes to... /var/log/glusterfs/bricks/.glusterfs-data.log gets filled with messages like: E [marker.c:2542:marker_removexattr_cbk] 0-data-marker: No data available occurred while creating symlinks
10:14 ktogias and I [server-rpc-fops.c:693:server_removexattr_cbk] 0-data-server: 8721417: REMOVEXATTR of key security.ima ==> (No data available)
10:14 ktogias Is it self-healing?
10:21 haomaiwa_ joined #gluster
10:24 mariusp joined #gluster
10:35 Gorian joined #gluster
10:54 haomai___ joined #gluster
11:03 mator joined #gluster
11:11 diegows joined #gluster
11:17 glusterbot New news from newglusterbugs: [Bug 764245] [FEAT] glusterfs requires CAP_SYS_ADMIN capability for "trusted" extended attributes - container unfriendly <https://bugzilla.redhat.com/show_bug.cgi?id=764245> || [Bug 763999] apt-get fails to work inside a Proxmox container: Value too large for defined data type <https://bugzilla.redhat.com/show_bug.cgi?id=763999> || [Bug 764034] [FEAT] Add login/password authentication to CLI <
11:31 glusterbot New news from resolvedglusterbugs: [Bug 764619] Providing native systemd service file for glusterd and glusterfsd <https://bugzilla.redhat.com/show_bug.cgi?id=764619>
11:43 SOLDIERz joined #gluster
11:47 glusterbot New news from newglusterbugs: [Bug 764624] Dependences for glusters(>=3.2.x) debian/ubuntu package. <https://bugzilla.redhat.com/show_bug.cgi?id=764624> || [Bug 764850] [FEAT] multi-homed access for fuse mounts on distinct networks is no longer possible <https://bugzilla.redhat.com/show_bug.cgi?id=764850>
11:57 russoisraeli joined #gluster
12:02 hightower4 joined #gluster
12:31 topshare joined #gluster
12:54 LebedevRI joined #gluster
12:56 topshare joined #gluster
12:57 ctria joined #gluster
13:02 topshare joined #gluster
13:02 adil452100 joined #gluster
13:05 diegows joined #gluster
13:06 adil452100 Hi all .. I have 3 Proxmox nodes .. I've installed glusterfs-server using one SSD disk dedicated to OpenVZ containers on each node ..   I've mounted the volume using the Proxmox GUI .. but the creation of a container takes several minutes ..   recap : 3 Proxmox nodes + 3 glusterfs servers (one on each node) + replica 2 + 4 bricks (2 bricks in each node) + default configuration ..   problem : long time to create a container ..   net
13:06 adil452100 Any help/ideas to tweak OpenVZ or GlusterFS for performance? ..   when using a SATA HDD the creation of a container is extremely fast ..   I'm just testing at the moment with no VMs hosted yet ..   each Proxmox node has 128 GB of RAM
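As a hedged sketch (not advice given in the channel), these are the volume options commonly toggled for VM/container stores on gluster 3.x; the volume name "vmstore" is illustrative:

    # settings typically applied to volumes backing VM or container images
    gluster volume set vmstore performance.quick-read off
    gluster volume set vmstore performance.read-ahead off
    gluster volume set vmstore performance.io-cache off
    gluster volume set vmstore performance.stat-prefetch off
    gluster volume set vmstore cluster.eager-lock enable
    gluster volume set vmstore network.remote-dio enable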
13:20 ktogias I managed to manually resolve the split-brained directories... and everything seems OK now
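For the record, a sketch of the manual directory/metadata split-brain fix that was common practice at the time: on the brick whose copy you do NOT trust, reset the pending AFR changelog xattrs and let the self-heal daemon re-sync. The volume name, client index, and paths below are hypothetical:

    # inspect the changelog xattrs on the copy you intend to discard
    getfattr -d -m trusted.afr -e hex /bricks/data/some/dir
    # zero out the xattr that blames the other replica
    # (the exact xattr name, e.g. trusted.afr.data-client-0, comes from the output above)
    setfattr -n trusted.afr.data-client-0 -v 0x000000000000000000000000 /bricks/data/some/dir
    # trigger a heal of the indexed entries
    gluster volume heal data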
13:22 meghanam joined #gluster
13:22 meghanam_ joined #gluster
13:58 harish joined #gluster
14:16 soumya__ joined #gluster
14:26 haomaiwa_ joined #gluster
14:48 rotbeard joined #gluster
14:56 theron joined #gluster
15:23 uebera|| joined #gluster
15:24 Mzoorikh joined #gluster
15:24 Mzoorikh left #gluster
16:07 elico joined #gluster
16:12 plarsen joined #gluster
17:20 meghanam joined #gluster
17:20 meghanam_ joined #gluster
18:09 Gorian joined #gluster
18:44 elico joined #gluster
18:47 johndescs_ joined #gluster
19:16 JoeJulian dataio: Re-installing an operating system will not affect the filesystem of the brick (unless you accidentally format it during installation). To be safe, consider whether it's a good idea to just unplug the disk(s) that make up the brick during re-install.
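A minimal sketch of post-reinstall sanity checks, assuming the brick filesystem is remounted at its old path; the volume and brick names are hypothetical:

    # the brick root keeps its volume id in an xattr; it should still be present
    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/myvol
    # after reinstalling glusterfs-server and restoring /var/lib/glusterd
    # (or re-probing the peer), confirm the brick comes back online
    gluster volume status myvol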
19:29 toecutter joined #gluster
19:42 bennyturns joined #gluster
19:46 toecutter joined #gluster
19:48 the-me semiosis: ping
19:48 glusterbot the-me: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
19:49 the-me ... fu I do naked pings whenever I want :p
19:54 dataio JoeJulian: Good to know. Thank you for the tips :)
19:59 toecutter joined #gluster
20:37 toecutter joined #gluster
21:17 purpleidea the-me: lol, and in this case you'll most likely be ignored, unless that person really wanted a naked ping from you :P
21:17 * purpleidea loves JoeJulian's bot
21:32 the-me purpleidea: I know him ;)
21:37 purpleidea :)
21:46 n-st joined #gluster
21:59 coredump joined #gluster
22:01 plarsen joined #gluster
22:50 hightower4 joined #gluster
23:10 johndescs joined #gluster
23:20 glusterbot New news from newglusterbugs: [Bug 958325] For Gluster-Swift integration, enhance quota translator to return count of objects as well as total size <https://bugzilla.redhat.com/show_bug.cgi?id=958325> || [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417> || [Bug 1083963] Dist-geo-rep : after renames on master, there are more number of files on slave than
