
IRC log for #gluster, 2013-09-03


All times shown according to UTC.

Time Nick Message
00:02 tryggvil joined #gluster
00:10 StarBeast joined #gluster
00:13 robo joined #gluster
00:13 jag3773 joined #gluster
00:32 jporterfield joined #gluster
00:58 jporterfield joined #gluster
01:10 kevein joined #gluster
01:12 nightwalk joined #gluster
01:16 bagira joined #gluster
01:17 bagira i am toying around with the idea of using disks from multiple storage machines across the network, bound into a single filesystem for a monolithic approach, and then mounting that as /home on a central system.  What would be the best way to go about doing this? I am under the impression that re-exporting an NFS share over NFS is generally a bad idea.  Is glusterfs able to do this?
01:17 DV joined #gluster
01:18 _chjohnsthome so a distributed set of bricks, no replica?
01:19 bagira e.g., say i've got a small linux installation on 8 machines on /dev/sda1 on all of them, and each of them has a blank /dev/sda2 comprising the remainder of their disk space.  I want to combine all of the /dev/sda2s in that setup into one gigantic mount at /home on a central linux machine.
01:21 bagira however if one of them goes down i don't want to lose everything, just what's on that disk (e.g. it needs to be scalable so i can add more in without redoing the whole setup)
01:24 bagira basically im just asking if glusterfs is the droid im looking for, if so, i can do the research to build it
01:28 bagira looking at the glusterfs documentation it looks like I want no replication and want to disable distribution across servers to create one volume from many bricks
01:33 _chjohnsthome sure 8 volumes/bricks can be put into anything you want
01:33 _chjohnsthome but if you want reliability you need to replicate the brick
01:34 _chjohnsthome sure you can stripe across all 8 of the machines, but if you lose one of the machines you lose that data
01:34 _chjohnsthome *actually not sure on the total number of bricks supported in a stripe though
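For the monolithic /home bagira describes, a plain distributed volume (the default when neither replica nor stripe is given) is the usual fit: files are placed whole on one brick or another, so losing a machine loses only the files that landed on its brick. A minimal sketch, assuming hypothetical hostnames node1..node8 and the spare partition mounted at /export/brick1 on each:

    # on each of the 8 storage machines: format and mount the spare partition
    mkfs.xfs /dev/sda2
    mkdir -p /export/brick1
    mount /dev/sda2 /export/brick1

    # from any one of them: build the trusted pool, then the volume
    gluster peer probe node2        # repeat for node3 .. node8
    gluster volume create homevol \
        node1:/export/brick1 node2:/export/brick1 node3:/export/brick1 node4:/export/brick1 \
        node5:/export/brick1 node6:/export/brick1 node7:/export/brick1 node8:/export/brick1
    gluster volume start homevol

    # on the central machine
    mount -t glusterfs node1:/homevol /home

Growing it later is an add-brick plus a rebalance (gluster volume add-brick homevol node9:/export/brick1; gluster volume rebalance homevol start), which matches the "add more in without redoing the whole setup" requirement.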
01:49 nightwalk joined #gluster
01:59 nueces joined #gluster
02:15 jporterfield joined #gluster
02:21 _chjohnsthome just wget
02:21 _chjohnsthome and change the end of the graph query to &format=json
02:22 dkorzhevin joined #gluster
02:22 _chjohnsthome oops wrong chat room
02:35 jporterfield joined #gluster
02:39 DV joined #gluster
02:49 saurabh joined #gluster
02:52 vshankar joined #gluster
02:55 nueces joined #gluster
02:59 VeggieMeat joined #gluster
03:07 raghug joined #gluster
03:10 jporterfield joined #gluster
03:15 jporterfield joined #gluster
03:24 bharata-rao joined #gluster
03:27 DV joined #gluster
03:35 shubhendu joined #gluster
03:38 awheeler joined #gluster
03:52 sgowda joined #gluster
03:52 itisravi joined #gluster
04:04 jporterfield joined #gluster
04:04 kanagaraj joined #gluster
04:05 ndarshan joined #gluster
04:13 shubhendu joined #gluster
04:16 raghug joined #gluster
04:16 kshlm joined #gluster
04:21 shruti joined #gluster
04:22 jporterfield joined #gluster
04:24 ababu joined #gluster
04:25 dusmant joined #gluster
04:25 ppai joined #gluster
04:27 ababu joined #gluster
04:29 RameshN joined #gluster
04:34 spandit joined #gluster
04:34 davinder joined #gluster
04:34 raghug joined #gluster
04:43 rastar joined #gluster
04:48 CheRi joined #gluster
04:53 aravindavk joined #gluster
04:55 hagarth joined #gluster
04:58 bala joined #gluster
05:03 raghug joined #gluster
05:10 raghug joined #gluster
05:11 bulde joined #gluster
05:17 jporterfield joined #gluster
05:21 lalatenduM joined #gluster
05:22 satheesh1 joined #gluster
05:23 lala_ joined #gluster
05:23 psharma joined #gluster
05:24 raghu joined #gluster
05:38 rjoseph joined #gluster
05:47 anands joined #gluster
05:47 ajha joined #gluster
05:52 vshankar joined #gluster
05:55 shylesh joined #gluster
05:57 nshaikh joined #gluster
06:03 jporterfield joined #gluster
06:08 jtux joined #gluster
06:25 raghug joined #gluster
06:29 vpshastry joined #gluster
06:31 raghug joined #gluster
06:32 aravindavk joined #gluster
06:33 vimal joined #gluster
06:37 rgustafs joined #gluster
06:39 fidevo joined #gluster
06:40 barnim joined #gluster
06:49 dusmant joined #gluster
06:50 Guest44716 joined #gluster
06:50 Guest44716 hi
06:50 glusterbot Guest44716: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:53 rwheeler joined #gluster
06:55 Guest44716 left #gluster
06:58 root03 joined #gluster
06:58 eseyman joined #gluster
06:58 ngoswami joined #gluster
07:00 CheRi joined #gluster
07:03 root03 Glusterfs version 3.2.2 :  I have a gluster volume in which one of the peer nodes had crashed, and it was removed long ago prior to my service here. I see from volume info that the crashed node is still listed as one of the peers and the bricks are also in place. I would like to detach this node and its bricks and rebalance the volume with the other 3 peers. But I am unable to do so. Here are my steps:
07:03 root03 1.
07:03 root03 gluster volume info
07:03 root03 Volume Name: test-volume
07:03 root03 Type: Distributed-Replicate
07:03 root03 Status: Started
07:03 root03 Number of Bricks: 4 x 2 = 8
07:03 root03 Transport-type: tcp
07:03 root03 Bricks:
07:03 root03 Brick1: dbstore1r293:/datastore1
07:03 root03 Brick2: dbstore2r293:/datastore1
07:03 root03 Brick3: dbstore3r294:/datastore1
07:03 root03 Brick4: dbstore4r294:/datastore1
07:03 root03 Brick5: dbstore1r293:/datastore2
07:06 RameshN joined #gluster
07:09 ctria joined #gluster
07:10 root03 peer status shows  this ---   http://pastebin.com/yZE4UFi7
07:10 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
07:10 ricky-ticky joined #gluster
07:11 root03 @paste
07:11 glusterbot root03: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
07:12 root03 peer status shows the following -- http://paste.ubuntu.com/6057932/
07:12 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
07:13 root03 Title: Peer Status of my Gluster Nodes (at http://paste.ubuntu.com/6057932)
07:13 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
07:15 shylesh joined #gluster
07:16 root03 any help?
07:16 root03 only glusterbot seems to chat with me :)
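Nobody picked this up in-channel. For the record, the usual way to retire a dead server from a distributed-replicate volume is to move its bricks elsewhere with replace-brick and only then detach the peer. A hedged sketch, using dbstore1r293 purely as a stand-in for the crashed host and newstore as a hypothetical replacement server (exact option syntax may differ on a release as old as 3.2.2):

    # re-home each brick the dead server used to carry
    gluster volume replace-brick test-volume dbstore1r293:/datastore1 newstore:/datastore1 commit force
    gluster volume replace-brick test-volume dbstore1r293:/datastore2 newstore:/datastore2 commit force

    # once it backs no bricks, drop it from the pool
    gluster peer detach dbstore1r293

    # 3.2 has no "volume heal" command, so trigger self-heal from a client mount,
    # then rebalance if the layout changed
    find /mnt/test-volume -print0 | xargs -0 stat > /dev/null
    gluster volume rebalance test-volume start

remove-brick on a replicated volume has to take whole replica pairs, which on this layout would drag healthy bricks along with the dead ones, so replace-brick is the safer route.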
07:19 dusmant joined #gluster
07:21 mohankumar__ joined #gluster
07:21 hybrid512 joined #gluster
07:26 ndarshan joined #gluster
07:32 DV joined #gluster
07:42 hchiramm_ joined #gluster
07:45 ProT-0-TypE joined #gluster
07:46 ProT-0-TypE joined #gluster
07:47 ProT-0-TypE joined #gluster
07:50 mgebbe_ joined #gluster
07:57 jtux joined #gluster
07:58 mooperd joined #gluster
08:01 DV joined #gluster
08:12 hchiramm_ joined #gluster
08:14 ricky-ticky joined #gluster
08:16 shubhendu joined #gluster
08:19 ricky-ticky joined #gluster
08:24 hybrid5122 joined #gluster
08:27 ricky-ticky joined #gluster
08:29 satheesh1 joined #gluster
08:31 meghanam joined #gluster
08:31 meghanam_ joined #gluster
08:37 atrius joined #gluster
08:38 ricky-ticky joined #gluster
08:41 hybrid5121 joined #gluster
08:54 RameshN joined #gluster
08:56 ricky-ticky joined #gluster
09:01 msciciel joined #gluster
09:03 satheesh1 joined #gluster
09:04 root03 left #gluster
09:05 bulde joined #gluster
09:14 bala joined #gluster
09:15 glusterbot New news from newglusterbugs: [Bug 1003800] dist-geo-rep: running "make" failes to sync files to the slave <http://goo.gl/IFrXeR>
09:17 msciciel joined #gluster
09:20 bulde1 joined #gluster
09:25 shubhendu joined #gluster
09:26 jporterfield joined #gluster
09:29 kanagaraj joined #gluster
09:31 jtux joined #gluster
09:41 tryggvil joined #gluster
09:45 glusterbot New news from newglusterbugs: [Bug 1003805] Dist-geo-rep : geo-rep failed to sync few of the hardlinks to one of the slaves, when there are many. <http://goo.gl/gZZ9Li> || [Bug 1003803] Dist-geo-rep : worker process dies and started again frequently <http://goo.gl/jH6zuw> || [Bug 1003807] Dist-geo-rep: If a single slave node to which all the master nodes do aux mount goes down , all geo-rep status goes to Faulty. <http://g
09:52 manik joined #gluster
09:56 tryggvil joined #gluster
09:59 dusmant joined #gluster
10:04 shubhendu joined #gluster
10:06 nightwalk joined #gluster
10:06 kanagaraj joined #gluster
10:08 StarBeast joined #gluster
10:10 bulde joined #gluster
10:12 dneary joined #gluster
10:14 rjoseph joined #gluster
10:15 ndarshan joined #gluster
10:15 glusterbot New news from newglusterbugs: [Bug 994353] Dist-geo-rep: Worker in one of the master node keeps crashing and the session in that node is always faulty. <http://goo.gl/tOscXf> || [Bug 994461] Remove options that are deprecated in Big Bend (Geo-replication commands in particular) <http://goo.gl/ykSnXk> || [Bug 990900] Dist-geo-rep : imaster in cascaded geo-rep fails to do first xsync crawl and consequently fail to sync files
10:16 shruti joined #gluster
10:23 ababu joined #gluster
10:28 aravindavk joined #gluster
10:29 edward1 joined #gluster
10:34 tziOm joined #gluster
10:34 manik joined #gluster
10:42 satheesh1 joined #gluster
10:43 ricky-ticky joined #gluster
10:46 glusterbot New news from newglusterbugs: [Bug 1003842] AFR: Mount crash lead to data corruption when a brick is down <http://goo.gl/OLH54m>
10:56 jporterfield joined #gluster
11:18 DV joined #gluster
11:22 hagarth joined #gluster
11:24 RameshN joined #gluster
11:26 CheRi_ joined #gluster
11:28 duerF joined #gluster
11:33 RedShift joined #gluster
11:34 failshell joined #gluster
11:35 tryggvil joined #gluster
11:35 failshell joined #gluster
11:53 bulde joined #gluster
11:56 chirino joined #gluster
11:56 glusterbot New news from resolvedglusterbugs: [Bug 921408] [RHEV-RHS] Failed to re-run a VM which was in non-responsive state <http://goo.gl/g9hST7>
11:59 andreask joined #gluster
12:01 kanagaraj joined #gluster
12:06 shastri joined #gluster
12:07 sgowda joined #gluster
12:08 mooperd joined #gluster
12:11 jdarcy joined #gluster
12:24 sprachgenerator joined #gluster
12:30 B21956 joined #gluster
12:38 raghug joined #gluster
12:40 rcheleguini joined #gluster
12:41 jclift joined #gluster
12:42 bulde joined #gluster
12:44 manik joined #gluster
12:46 RameshN joined #gluster
12:51 robo joined #gluster
12:52 shruti joined #gluster
12:59 hagarth joined #gluster
13:02 redragon_ joined #gluster
13:02 kanagaraj joined #gluster
13:02 redragon_ so had a brick fail, replaced the drives, replication seems to be slow, are there some tunables around this?
13:03 davinder2 joined #gluster
13:07 lpabon joined #gluster
13:07 failshel_ joined #gluster
13:18 bulde joined #gluster
13:19 masterzen_ joined #gluster
13:24 semiosis redragon_: change the self heal algorithm
13:24 dewey joined #gluster
13:28 redragon_ have a doc?
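The knob semiosis means is an ordinary volume option, so there is no separate config file to edit; a quick sketch with a placeholder volume name:

    # list the self-heal related tunables and their descriptions
    gluster volume set help | grep -A2 self-heal

    # "diff" checksums blocks and copies only what differs; "full" just copies
    # the whole file, which tends to be faster onto a freshly replaced, empty brick
    gluster volume set myvol cluster.data-self-heal-algorithm full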
13:33 sgowda joined #gluster
13:40 saurabh joined #gluster
13:42 hybrid512 joined #gluster
13:48 bennyturns joined #gluster
13:49 chirino joined #gluster
13:54 zaitcev joined #gluster
13:56 tqrst joined #gluster
13:56 jbrooks joined #gluster
13:59 tqrst my 3.4.0 client on an ubuntu machine has been disconnecting seemingly randomly every couple of days. The logs show "W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f78baab3ccd] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f78bad86e9a] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xc5) [0x7f78bb871e85]))) 0-: received signum (15), shutting down"
13:59 tqrst this is rather odd given that I haven't explicitly called umount or killed the processes myself. Any ideas what might be going on? I've only seen this happen on my ubuntu workstation; everything works fine on my centos clients.
14:00 robos joined #gluster
14:00 dmojoryder is there a downside to using fopen-keep-cache when mounting glusterfs? Seems to really improve perf (and reduce network bandwidth) when repeatedly accessing files over gluster. Kinda almost seems it should be enabled by default
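No one answered dmojoryder, but for reference the option is a FUSE-client mount flag rather than a volume option; it keeps the kernel page cache across open() calls, so the main question is whether other clients rewrite the same files behind your back. A sketch with placeholder server/volume names (if the mount helper on a given version does not recognise the option, the glusterfs binary takes it directly):

    # via the mount helper
    mount -t glusterfs -o fopen-keep-cache server1:/myvol /mnt/myvol

    # or by invoking the fuse client by hand
    glusterfs --fopen-keep-cache --volfile-server=server1 --volfile-id=myvol /mnt/myvol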
14:02 Technicool joined #gluster
14:04 kkeithley tqrst: oom killer maybe?
14:06 saurabh joined #gluster
14:06 tqrst kkeithley: unlikely. This workstation has 16G ram and rarely has heavy workloads.
14:08 tqrst (and there isn't anything that would point to that in kern.log)
14:09 kkeithley the oom killer is usually logged in /var/log/messages AFAIK
14:09 kkeithley but with 16G of ram you're probably correct about that not being the culprit
14:10 ahomolya joined #gluster
14:10 ahomolya hi all
14:10 ahomolya i have a nasty problem here
14:11 ahomolya i have just installed glusterfs-server on an ubuntu server 13.04, fresh install
14:11 ahomolya glusterfs 3.4
14:11 tqrst nothing in messages either
14:11 bugs_ joined #gluster
14:11 ahomolya after running # gluster peer probe stor-cassandra-6
14:12 ahomolya i got the following error
14:12 ahomolya peer probe: failed: Error through RPC layer, retry again later
14:12 hagarth tqrst: probably mount with log-level DEBUG next time around?
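hagarth's suggestion translates to a client-side mount option; a sketch with placeholder names:

    # remount with a more verbose client log to get more context around the shutdown
    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/myvol-debug.log \
        server1:/myvol /mnt/myvol

    # or the /etc/fstab equivalent
    server1:/myvol  /mnt/myvol  glusterfs  log-level=DEBUG,_netdev  0 0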
14:12 ahomolya on the other machines it runs flawlessly
14:13 ahomolya same environment
14:13 tqrst hagarth: will do
14:15 awheeler joined #gluster
14:15 dusmant joined #gluster
14:16 awheeler joined #gluster
14:17 wushudoin joined #gluster
14:19 robos joined #gluster
14:26 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
14:33 _Bryan_ joined #gluster
14:38 shylesh joined #gluster
14:38 sprachgenerator joined #gluster
14:45 badone joined #gluster
14:47 mohankumar__ joined #gluster
14:51 awheeler joined #gluster
14:52 mohankumar joined #gluster
14:53 anands joined #gluster
14:54 vpshastry joined #gluster
14:56 MrNaviPacho joined #gluster
14:59 manik joined #gluster
15:02 daMaestro joined #gluster
15:12 jag3773 joined #gluster
15:21 zerick joined #gluster
15:29 kaptk2 joined #gluster
15:30 hchiramm_ joined #gluster
15:37 davinder joined #gluster
15:45 plarsen joined #gluster
16:02 mooperd joined #gluster
16:06 manik joined #gluster
16:18 tjstansell anyone know what option would increase the log level for events related to peer probes, etc?
16:18 tjstansell as in, the stuff in etc-glusterfs-glusterd.vol.log
16:19 tjstansell i'm hoping there's a way to log what state glusterd puts peers in as it negotiates.
16:27 tqrst tjstansell: have a look at GLUSTERD_LOGLEVEL in /etc/sysconfig/glusterd
16:28 kanagaraj joined #gluster
16:28 tqrst tjstansell: that config file is read by /etc/init.d/glusterd. There might also be a way to update the log level without restarting the servers, but I'm not aware of it.
16:28 tjstansell thanks. that gives me a place to start...
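On RPM-based installs the init script simply passes that variable through to the daemon, so the sketch below is roughly equivalent either way (exact packaging details vary):

    # /etc/sysconfig/glusterd
    GLUSTERD_LOGLEVEL=DEBUG      # or TRACE for the full peer state-machine chatter

    # restart the management daemon to pick it up
    service glusterd restart

    # the same thing without the sysconfig file
    glusterd --log-level=TRACE

The peer-probe state transitions tjstansell is after show up in etc-glusterfs-glusterd.vol.log once the level is at DEBUG or TRACE.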
16:31 ctria joined #gluster
16:31 LoudNoises joined #gluster
16:32 eightyeight joined #gluster
16:33 bennyturns JoeJulian, ping
16:37 jcsp joined #gluster
16:38 aliguori joined #gluster
16:40 Mo__ joined #gluster
16:46 vpshastry joined #gluster
16:47 tryggvil joined #gluster
16:54 B21956 joined #gluster
17:16 hagarth joined #gluster
17:22 lpabon joined #gluster
17:23 mooperd_ joined #gluster
17:23 plarsen joined #gluster
17:24 plarsen joined #gluster
17:31 mooperd joined #gluster
17:32 tjstansell joejulian: now that i'm working again, i'm trying to get more useful info (trace level debug logs) of it failing before filing the bug.
17:32 jporterfield joined #gluster
17:35 hagarth joined #gluster
17:43 vpshastry left #gluster
17:55 lalatenduM joined #gluster
17:59 robo joined #gluster
18:09 davinder joined #gluster
18:11 B21956 joined #gluster
18:12 mooperd joined #gluster
18:20 robo joined #gluster
18:23 tjstansell heh.  with trace level for glusterd, when my node i'm rebuilding gets its volume information, it tries to create the run dir to store the brick pid number *before* it actually saves the volume info into /var/lib/glusterd/vols/<vol>, which fails and causes the bricks to not start.
18:24 tjstansell not sure if it's related to the trace log level, but i hadn't ever seen this before turning that on and it's done it twice in a row now.
18:31 davinder joined #gluster
18:32 JoeJulian bennyturns: pong
18:33 JoeJulian tjstansell: Odd. Why doesn't the /var/lib/glusterd/vols/<vol> directory exist? It's supposed to be there.
18:34 tjstansell this is in my case of rebuilding a node...
18:34 JoeJulian Ah, right...
18:34 tjstansell while it's pulling the volume info from the node that's still up, it apparently tries to start the brick before it actually writes the volume info to disk.
18:34 * JoeJulian beats tjstansell over the head for his use of the word "node" again...
18:35 tjstansell node, host, whatever :)
18:35 JoeJulian @glossary
18:35 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:35 bennyturns JoeJulian, Hi yas!  iirc I was talking about RHEL 5 issues a while back and I thought you commented about a known issue with fuse in el5 that caused terribad performance.  Do I remember correctly?
18:35 JoeJulian node=smurf
18:35 tjstansell heh
18:36 JoeJulian bennyturns: most (if not all) of the performance issues in EL5 are known.
18:37 bennyturns JoeJulian, right, someone had a question on https://bugzilla.redhat.com/show_bug.cgi?id=896314 and I responded back that I thought the root cause of that was already known.  I was just trying to put the details back together
18:37 glusterbot <http://goo.gl/LvAdhH> (at bugzilla.redhat.com)
18:37 glusterbot Bug 896314: high, medium, ---, spradhan, ASSIGNED , Bonnie++ runs taking exceedingly long time on replicated volume mounted over glusterfs on RHEL 5.9 client.
18:37 JoeJulian I do find it comical the parallels you can find between sysadmins and smurfs. When you get a large gathering of sysadmins, there's usually only one old guy, one female, and everyone uses the ambiguous term, "node" for every other thing.
18:38 bennyturns lolz
18:41 JoeJulian bennyturns: This is fuse or nfs?
18:41 Mo__ joined #gluster
18:41 bennyturns JoeJulian, fuse only, NFS works fine.
18:42 JoeJulian I wonder if the current fuse module can be compiled against 2.6.18...
18:47 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
18:55 RedShift does turning off performance.flush-behind guarantee that every brick in a 2 node replication setup will have the data written before returning OK to the client?
18:55 RedShift do I need to tune other variables for this?
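For what it's worth, both of the relevant knobs are plain volume options, and in a replica-2 volume the FUSE client writes to both bricks itself; flush-behind and write-behind only change how early flush()/write() calls are acknowledged, so the experiment is cheap to run (placeholder volume name):

    # acknowledge flush/close only after the bricks have responded
    gluster volume set myvol performance.flush-behind off

    # optionally make write() itself wait for the bricks as well
    gluster volume set myvol performance.write-behind off

    # confirm what is set
    gluster volume info myvol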
18:58 RedShift JoeJulian lol, when I go to IT meetings, the closest thing to a female is a man with hear in a ponytail
18:59 RedShift *hair
19:03 jag3773 joined #gluster
19:03 JoeJulian ... just to be clear, old guys per that stereotype are over 65. My 44 years is NOT old!
19:04 * redragon_ agrees with JoeJulian about 44 not being old because that makes 45 not being old ....
19:04 redragon_ JoeJulian, so had a drive fail on a brick, dropped brick, rebuilt raid0, added brick, but replication seems really slow catching it up, any tunables for this?
19:04 RedShift ... just to be clear, old guys per that stereotype are over 40. My 26 years is NOT old! ;-)
19:05 JoeJulian :P
19:05 redragon_ JoeJulian, should we make sure we wrap him in his blanky before his nap?
19:05 partner can someone please point me to the url explaining uid/gid mappings, i seem to keep finding wrong pages ?
19:06 JoeJulian file:///dev/null
19:06 partner features/filter i guess
19:07 RedShift perception of age changes as you age yourself
19:07 JoeJulian partner: Yes and no... I'm pretty sure filter hasn't seen any love since 2.0.
19:07 RedShift your definition of "old" and "young" changes
19:07 redragon_ partner, not really sure what your looking for on uid/gid mappings? in relations to local users, network based users, what?
19:07 redragon_ ah, now I understand what he's looking for (more anyway)
19:09 partner lets say i have 10 servers doing the same stuff but installed at different times and in different order -> the uid/gid the gluster mount shows varies between the clients and thus causes permission issues
19:09 JoeJulian redragon_: cluster.data-self-heal-algorithm  and cluster.background-self-heal-count. Be sure and keep an eye on your load, though. Background self-heal also runs at a lower task priority to regular use.
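Turning JoeJulian's suggestions into commands, with a placeholder volume name and an example value (the default background count is on the order of 16):

    # how many files are healed in the background concurrently
    gluster volume set myvol cluster.background-self-heal-count 32

    # on 3.3+ the catch-up can be watched, or a full crawl kicked off explicitly
    gluster volume heal myvol info
    gluster volume heal myvol full

As he says, watch the load: background heals deliberately run at a lower priority than regular client I/O.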
19:10 JoeJulian chown -R
19:10 JoeJulian Also make sure you use the same uid/gid for each user. There is no working mapping (hence my link to /dev/null)
19:11 redragon_ partner, rule #1 of any network based file system (ie nfs) is match your uid/gid
19:11 partner chown would just ruin it for other server..
19:11 partner redragon_: yes we have that
19:11 partner BUT as gluster provides such functionality its not wrong to ask whether it can do the job
19:11 JoeJulian partner: scroll up
19:12 redragon_ didn't know it did mappings :)  i'm pretty new to gluster, thats why I bug joe so much heh
19:12 partner http://gluster.org/community/documentation/index.php/Translators/features/filter
19:12 glusterbot <http://goo.gl/b5BQ0> (at gluster.org)
19:12 partner JoeJulian: yeah, got it
19:12 partner pretty much suspected this is the case but didn't cost much to ask and be sure :)
19:13 RedShift JoeJulian are you a gluster developer or user?
19:13 partner such would just allow quite a bit of flexibility on mixed/legacy environments where remote filesystem usage is introduced afterwards
19:14 JoeJulian RedShift: I'm a user.
19:15 redragon_ JoeJulian, hmm interesting I see an option for auth.allow and auth.reject, access control via ip for access to volume
19:15 JoeJulian partner: I'm sure if someone wanted to get it working again and maintain it, that would be readily accepted.
19:16 partner Joe i would think so too. unfortunately i cannot volunteer, no skills for such job.. :/
19:17 JoeJulian Let's get RedShift to do it. He's still young and impressionable.
19:17 RedShift wut
19:17 partner hihi
19:17 RedShift what needs to be done?
19:17 JoeJulian Learn C and update/maintain the filter translator.
19:17 RedShift what's wrong with it?
19:18 JoeJulian Not sure. Hasn't seen any love in years.
19:18 RedShift what does it do?
19:18 JoeJulian username mapping
19:19 RedShift pfr
19:19 RedShift not even applicable in my usage scenarios
19:19 JoeJulian Get it back into the mainline graph and partner will buy you a beer. :D
19:19 partner two
19:20 partner make it a case
19:20 partner two
19:20 RedShift why would usernames need to be mapped anyway, linux stores UID's
19:20 partner :)
19:20 partner Joe meant uid-mapping, see previous link on the channel
19:21 RedShift looks like a moot feature to me, not gonna spend time on that
19:21 JoeJulian No, it's username mapping. Maps the username from the client to the uid on the server.
19:22 JoeJulian And that's why it's dead. Everyone would just rather have the same uids/gids.
19:23 partner yeah, it just sucks to go into a machine and start rearranging users afterwards
19:23 partner and multiply that by nnn
19:24 RedShift I wonder, if you need tricks like that, didn't you start wrong in the first place... UIDs are supposed to be the same everywhere
19:24 RedShift if you're doing something to mangle with stuff like that, you're doing it wrong IMO
19:27 partner you obviously haven't seen legacy around your premises
19:28 JoeJulian I had the same legacy. I used puppet trickery to manage the changes to make them all consistent.
19:29 JoeJulian http://stackoverflow.com/questions/12111954/context-switches-much-slower-in-new-linux-kernels
19:29 glusterbot <http://goo.gl/Te4A6V> (at stackoverflow.com)
19:30 JoeJulian Very "cool" info wrt idle states that would have a direct impact on context switches and fuse clients.
19:35 RedShift nice find
19:36 RedShift partner: I LOVE to clean up legacy shit
19:37 RedShift you can throw all the old black stuff to me
19:37 RedShift *black box
19:44 partner RedShift: i'm sure it's fine if you have just a bunch of it.. fortunately the issue is not big regarding the gluster usage, our clients are using the same uid/gid currently, there's a few exceptions though but it's not much considering the thousands of services out there
19:47 jporterfield joined #gluster
19:58 jporterfield joined #gluster
20:02 jcsp joined #gluster
20:05 root_ joined #gluster
20:06 root_ hello, I'm trying a replica between my server and a raspberry pi that I've got at home, and I am experiencing empty files...any idea ?
20:06 yam25 they are big files (movies)
20:07 yam25 I've got 2 or 3 well synchronised, but all the rest are 0 size
20:08 yam25 it might be because the bandwidth at home is not enough to synchronize everything ?
20:08 yam25 sorry my english is not very good
20:09 yam25 I'm using a pptp vpn to create a path between the remote and the home servers
20:09 yam25 I've issued split-brain command that shows nothing special:
20:09 yam25 root@raspberrypi:~# gluster vol heal vol1 info split-brain
20:09 yam25 Gathering Heal info on volume vol1 has been successful
20:09 yam25 Brick 192.168.15.50:/var/export/vol1
20:09 yam25 Number of entries: 0
20:09 yam25 Brick 192.168.15.30:/var/export/vol1
20:09 yam25 Number of entries: 0
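yam25 left before anyone could answer, but zero-byte files on one side of a replica are usually just entries that self-heal has created and not yet filled with data; a pile of movies over a home VPN link can take a long time to catch up. On 3.3/3.4 the backlog can be inspected and a full crawl forced (volume name taken from the paste above):

    # entries still waiting to be healed, per brick
    gluster volume heal vol1 info

    # force a full self-heal crawl of the volume
    gluster volume heal vol1 full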
20:11 yam25 well if someone would like to use my setup for testing purposes I would be keen to let him do, provided I can trust him of course, please contact me at juan.fco.rodriguez@gmail.com
20:11 yam25 I will disconnect now
20:11 yam25 ...
20:11 yam25 ...snif :(
20:12 * yam25 is a bit disappointed nobody answers...
20:12 yam25 hello ?
20:13 yam25 left #gluster
20:13 rotbeard joined #gluster
20:19 plarsen joined #gluster
20:37 _Bryan_ what is the getfattr command to see all gluster extended attributes?
20:42 _pol joined #gluster
20:48 jag3773 joined #gluster
20:50 andreask joined #gluster
20:55 badone joined #gluster
20:59 B21956 left #gluster
21:01 cfeller joined #gluster
21:13 robo joined #gluster
21:28 flrichar joined #gluster
21:31 plarsen joined #gluster
21:41 JoeJulian ~extended attributes | _Bryan_
21:41 glusterbot _Bryan_: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
21:42 _Bryan_ JoeJulian:  Thanks...
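A concrete run of glusterbot's command, executed as root against a brick path on the server rather than through the client mount (the path here is only an example); the -m . pattern matches every attribute name, including the trusted.* ones gluster maintains:

    getfattr -m . -d -e hex /var/export/vol1/somefile
    # typical output includes lines such as
    #   trusted.gfid=0x...
    #   trusted.afr.<volname>-client-0=0x...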
21:50 theron joined #gluster
22:13 duerF joined #gluster
22:48 awheele__ joined #gluster
22:48 glusterbot New news from newglusterbugs: [Bug 1004091] SMB:smbd crashes while doing volume operations <http://goo.gl/2hQgXT>
22:52 sprachgenerator joined #gluster
23:16 jporterfield joined #gluster
23:28 StarBeast joined #gluster
23:33 aliguori joined #gluster
23:36 a2_ JoeJulian, thanks for sharing the link!
23:36 JoeJulian a2_: You bet. I happened on it by accident, but I really want a chance to try benchmarking that now.
23:40 a2_ on my desktop i get a latency of ~45us
23:40 a2_ it's quite a powerful desktop
23:41 a2_ trying to get some more analysis done from our perf team
23:48 glusterbot New news from newglusterbugs: [Bug 1004100] smbd crashes, likely due to mishandling of EA values in vfs_glusterfs and/or libgfapi <http://goo.gl/XRcTC1>
