
IRC log for #gluster, 2015-09-24


All times shown according to UTC.

Time Nick Message
00:03 gildub joined #gluster
00:05 suliba joined #gluster
00:12 nangthang joined #gluster
00:34 PaulePanter joined #gluster
00:35 DJClean joined #gluster
00:36 capri joined #gluster
00:37 d-fence joined #gluster
00:38 _fortis joined #gluster
00:52 zhangjn joined #gluster
00:53 partner joined #gluster
00:57 haomaiwa_ joined #gluster
01:00 haomaiwang joined #gluster
01:01 haomaiwa_ joined #gluster
01:03 johndescs_ joined #gluster
01:06 _joel joined #gluster
01:08 Pupeno joined #gluster
01:28 Lee1092 joined #gluster
01:31 dgbaley joined #gluster
01:51 julim joined #gluster
01:57 haomaiwa_ joined #gluster
02:01 haomaiwang joined #gluster
02:04 vmallika joined #gluster
02:09 nangthang joined #gluster
02:27 dgandhi joined #gluster
02:53 htrmeira joined #gluster
03:01 haomaiwa_ joined #gluster
03:01 overclk joined #gluster
03:03 vmallika joined #gluster
03:04 skoduri joined #gluster
03:08 Pupeno joined #gluster
03:14 bharata-rao joined #gluster
03:17 calavera joined #gluster
03:22 vmallika joined #gluster
03:25 skoduri joined #gluster
03:29 ron-slc joined #gluster
03:44 nbalacha joined #gluster
03:50 TheSeven joined #gluster
03:51 shubhendu__ joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 itisravi joined #gluster
04:08 RameshN joined #gluster
04:08 kanagaraj joined #gluster
04:12 neha joined #gluster
04:13 sakshi joined #gluster
04:13 LebedevRI joined #gluster
04:18 yazhini joined #gluster
04:22 ramky joined #gluster
04:34 kdhananjay joined #gluster
04:41 pppp joined #gluster
04:43 overclk joined #gluster
04:45 ramteid joined #gluster
04:47 ppai joined #gluster
04:48 gem joined #gluster
04:59 plarsen joined #gluster
05:01 haomaiwa_ joined #gluster
05:06 ndarshan joined #gluster
05:18 vimal joined #gluster
05:19 Bhaskarakiran joined #gluster
05:21 raghu joined #gluster
05:25 kdhananjay joined #gluster
05:29 kotreshhr joined #gluster
05:32 Manikandan joined #gluster
05:33 nishanth joined #gluster
05:36 yangfeng joined #gluster
05:38 kdhananjay joined #gluster
05:39 jiffin joined #gluster
05:45 ramky joined #gluster
05:49 hchiramm_ joined #gluster
05:50 hchiramm joined #gluster
05:51 SOLDIERz joined #gluster
05:52 SOLDIERz Hello everyone quick question can somebody explain me the following command a little bit: gluster volume top test-volume read-perf bs 256 count 1 brick server:/export/ list-cnt 10
05:52 SOLDIERz I thought this is displaying me the Read Performance per Brick but my output looks like the following:
05:54 SOLDIERz http://pastebin.com/6HaeV5ix
05:54 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
05:54 pranithk joined #gluster
05:55 SOLDIERz http://fpaste.org/270863/30740911/
05:55 glusterbot Title: #270863 Fedora Project Pastebin (at fpaste.org)
05:55 hgowtham joined #gluster
05:56 samsaffron___ joined #gluster
05:58 JoeJulian Looks like it's per fd
05:58 m0zes joined #gluster
06:00 SOLDIERz @JoeJulian hm?
06:00 amud joined #gluster
06:00 JoeJulian file descriptor
06:01 haomaiwa_ joined #gluster
06:09 mhulsman joined #gluster
06:10 SOLDIERz well file descriptor is another check
06:11 SOLDIERz what i'm concerned about is how gluster volume top is calculating the read throughput without any throughput in front of the files?
06:12 pppp joined #gluster
06:18 overclk_ joined #gluster
06:20 vimal joined #gluster
06:23 jtux joined #gluster
06:28 rgustafs joined #gluster
06:34 tessier joined #gluster
06:34 atalur joined #gluster
06:37 nangthang joined #gluster
06:38 Guest11162 joined #gluster
06:39 rjoseph joined #gluster
06:45 spalai joined #gluster
06:45 dusmant joined #gluster
06:46 deepakcs joined #gluster
07:01 haomaiwang joined #gluster
07:08 arcolife joined #gluster
07:08 SarsTW Hi all, how to export NFS in Gluster with specific IPs for specific folders?
07:09 SarsTW If I set
07:09 SarsTW nfs.rpc-auth-allow: 192.168.1.1,192.168.1.2,127.0.0.1
07:09 SarsTW nfs.export-dir: /FolderA,/FolderB
07:09 SarsTW showmount -e will be
07:09 SarsTW /test3/FolderA 192.168.1.1,192.168.1.2
07:09 SarsTW /test3/FolderB 192.168.1.1,192.168.1.2
07:09 SarsTW But I need
07:09 SarsTW /test3/FolderA 192.168.1.1
07:09 SarsTW /test3/FolderB 192.168.1.2
07:10 Trefex joined #gluster
07:10 jiffin SarsTW: there is another export/netgroup feature in gluster-nfs
07:10 nbalacha joined #gluster
07:10 jiffin which is more similar to knfs export feature.
07:11 ndevos SarsTW: I think you can try nfs.export-dir "/FolderA(192.168.1.1),/FolderB(192.168.1.2)"
07:11 ndevos SarsTW: and, what jiffin said ;-) that gives you even more control, in a clearer way
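For readers following along: ndevos's suggested syntax puts the permitted client in parentheses after each subdirectory in nfs.export-dir. A sketch of what SarsTW could try, untested here; the volume name test3 is inferred from the showmount output above, and option behaviour should be verified against your release's docs:

```shell
# Per-directory client restriction for the built-in Gluster NFS server,
# following ndevos's suggestion above. Volume name "test3" is inferred
# from the showmount output; adjust to your setup.
gluster volume set test3 nfs.rpc-auth-allow "192.168.1.1,192.168.1.2,127.0.0.1"
gluster volume set test3 nfs.export-dir "/FolderA(192.168.1.1),/FolderB(192.168.1.2)"

# Then re-check the export list:
showmount -e localhost
```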
07:12 ppai joined #gluster
07:12 jiffin ndevos: :)
07:14 ndevos jiffin: I fail to find a description on how to use it, do you know where (or if?) we have that?
07:15 jiffin ndevos: me too
07:15 jiffin ndevos: previously we had feature page
07:15 * jiffin is checking for that
07:16 ndevos jiffin: yeah... I dont see any of the feature pages on gluster.readthedocs.org anymore :-/
07:16 jiffin ndevos: http://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%203.7/Exports%20and%20Netgroups%20Authentication/index.html?highlight=netgroup seems to be dead
07:16 jiffin :/
07:16 ndevos hchiramm: do you know where the feature pages went? ^
07:18 jiffin rastar: any idea??
07:18 ghenry joined #gluster
07:18 ndevos hchiramm: the search still finds it though: https://readthedocs.org//search/?q=netgroups&type=file&project=gluster&version=latest
07:18 glusterbot Title: Search: netgroups | Read the Docs (at readthedocs.org)
07:18 ghenry joined #gluster
07:18 ndevos but thats a 404 link it shows :-/
07:19 ramky joined #gluster
07:20 hchiramm_ ndevos, it moved to a seperate project
07:21 hchiramm_ https://github.com/gluster/glusterfs-specs
07:21 glusterbot Title: gluster/glusterfs-specs · GitHub (at github.com)
07:21 hchiramm_ jiffin, ndevos ^^^
07:21 ndevos hchiramm_: ah, right, and those are not part of the docs anymore?
07:21 hchiramm_ which is a mirror from http://review.gluster.org/#/admin/projects/glusterfs-specs
07:21 glusterbot Title: Gerrit Code Review (at review.gluster.org)
07:22 hchiramm_ ndevos, if we want we can point this git repo to read the docs..
07:22 * hchiramm_ brb
07:23 ndevos hchiramm_: yeah, I think the ones in the done/ directory should be part of our documentation
07:23 SarsTW jiffin: It seems that Export/Netgroup is the new feature of 3.7. I'll try it!
07:23 ndevos jiffin, SarsTW: this link works, but it does not contain any examples... https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.7/Exports%20and%20Netgroups%20Authentication.md
07:23 glusterbot Title: glusterfs-specs/Exports and Netgroups Authentication.md at master · gluster/glusterfs-specs · GitHub (at github.com)
07:24 jiffin ndevos: yes
07:24 ndevos SarsTW: yes, 3.7 is the 1st release that has it, and jiffin did a lot of work on it :)
07:24 Pupeno joined #gluster
07:25 Trefex joined #gluster
07:25 jiffin ndevos: we should add doc for the same for users, i will do that asap
07:26 ndevos jiffin++ thanks! I just wanted to ask you :D
07:26 glusterbot ndevos: jiffin's karma is now 1
07:26 [Enrico] joined #gluster
07:26 SarsTW jiffin, ndevos: Thanks a lot!
07:26 ndevos jiffin: send a patch for that feature page to Gerrit, or add a new doc to the glusterdocs repo on GitHub
07:28 jiffin ndevos: in my opinion, it is better to add a new doc in glusterdocs
07:28 jiffin SarsTW: np
07:28 ndevos jiffin: sure, I'll leave it up to you :)
07:35 mhulsman1 joined #gluster
07:50 overclk joined #gluster
07:53 ppai joined #gluster
08:01 haomaiwa_ joined #gluster
08:02 ctria joined #gluster
08:03 XpineX joined #gluster
08:05 Driskell joined #gluster
08:15 overclk joined #gluster
08:16 Driskell Does anyone know if the PPA packages for 3.7 are going to get the upstart job rather than init that fixes automount problems during boot? I can see the universe packages which are 3.4.2 contain the fix since January but it seems none of the PPA packages for 3.6 or 3.7 contain the fix at all. I'm wondering if there are any plans to merge that fix into the PPA packages since it currently means I have to manually patch every system or downgrade to the 3.4 universe packaging :( Thanks for the help.
08:32 ndevos Driskell: would that be this bug? https://bugzilla.redhat.com/show_bug.cgi?id=1231983
08:32 glusterbot Bug 1231983: medium, unspecified, ---, bugs, NEW , Upstart job mounting-glusterfs.conf increases unnecessary 30 seconds in Ubuntu boot
08:34 overclk joined #gluster
08:34 ndevos hgowtham: you seem to be looking into Ubuntu packages, right? maybe you can check with Driskell
08:37 fala joined #gluster
08:38 zhangjn joined #gluster
08:43 maveric_amitc_ joined #gluster
08:45 anil joined #gluster
08:54 Guest60544 joined #gluster
08:54 Guest60544 left #gluster
08:56 So4ring joined #gluster
08:56 GB21 joined #gluster
08:59 hgowtham ndevos, I have done the packaging of gluster 3.6.6 for ubuntu.
09:00 ppai joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 Bhaskarakiran joined #gluster
09:09 s19n joined #gluster
09:11 s19n Hi all. Is there a tool which, given a path, will tell which brick or replica set it belongs to?
09:11 So4ring joined #gluster
09:16 jiffin s19n: may be this will be helpful getfattr -n trusted.glusterfs.pathinfo <file_path>
09:17 xrsanet left #gluster
09:22 s19n jiffin: "No such attribute"... I'm using 3.4.7, is that too old to have that attribute?
09:23 s19n ah! -- that was meant on the FUSE mountpoint, not the brick. Sorry
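To make jiffin's tip concrete: the pathinfo xattr is a virtual attribute answered by the client-side translator stack, so it must be queried through the FUSE mountpoint (as s19n found out), not on the brick directory. A sketch with hypothetical mount and file paths:

```shell
# Ask the client which brick(s) hold a file. Must run against the FUSE
# mount, not a brick directory; paths below are hypothetical.
getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/path/to/file
```

On a replicated volume the returned value lists one POSIX brick path per replica.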
09:25 kotreshhr joined #gluster
09:26 s19n ok, now -- what if I have recently expanded the volume, completed a rebalance-layout and would like to check if a file is on a "wrong" brick and would be moved by a "data" rebalance?
09:37 So4ring joined #gluster
09:39 overclk joined #gluster
09:40 s19n If I understand http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ correctly, the information I am looking for is not really attached to the file itself, but more on the directory containing it
09:40 Driskell @ndevos @hgowtham The problem I have was fixed here: https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1268064 but only in the Ubuntu universe packaging which is 3.4, it seems the PPA packaging is still using the initscripts and not the upstart job specified there. Though it is interesting to know that even with the new upstart job there are issues!
09:45 Driskell ndevos: More information is in the source tree here: https://forge.gluster.org/glusterfs-core/glusterfs/blobs/v3.7.4/extras/Ubuntu/README.Ubuntu
09:45 glusterbot Title: extras/Ubuntu/README.Ubuntu - glusterfs in GlusterFS Core - Gluster Community Forge (at forge.gluster.org)
09:47 Driskell hgowtham: thanks - be good to see if there's plans to include the fix in PPA as well as universe, and if there's any feeling on the bug ndevos mentioned
09:48 hgowtham Driskell, i'll check it out and let you know.
09:48 Driskell hgowtham: Thanks, really appreciate it :)
09:50 ndevos Driskell: thanks for the details, that should give hgowtham some ideas on what to do/check/fix
09:53 hgowtham ndevos, i'll check on what should be done and then ping Driskell
09:53 ndevos hgowtham: sure, thanks!
09:54 s19n found, thanks: https://gluster.readthedocs.org/en/release-3.7.0/Features/dht/
09:54 glusterbot Title: Distributed Hash Tables - Gluster Docs (at gluster.readthedocs.org)
10:01 haomaiwa_ joined #gluster
10:03 sakshi joined #gluster
10:05 Bhaskarakiran joined #gluster
10:05 spalai joined #gluster
10:06 Pupeno joined #gluster
10:06 ppai joined #gluster
10:08 kotreshhr joined #gluster
10:11 kkeithley1 joined #gluster
10:23 kdhananjay joined #gluster
10:31 nangthang joined #gluster
10:34 EinstCrazy joined #gluster
10:34 ppai joined #gluster
10:35 mhulsman joined #gluster
10:36 mhulsman1 joined #gluster
10:44 kotreshhr joined #gluster
10:48 haomaiw__ joined #gluster
10:49 haomaiwang joined #gluster
10:54 ssarah joined #gluster
10:55 ssarah Hallo guys :)
10:57 Bhaskarakiran joined #gluster
10:58 julim joined #gluster
10:58 Bhaskarakiran joined #gluster
11:01 haomaiwang joined #gluster
11:01 mhulsman joined #gluster
11:02 overclk joined #gluster
11:07 overclk_ joined #gluster
11:09 overclk__ joined #gluster
11:12 muneerse joined #gluster
11:13 kdhananjay joined #gluster
11:14 overclk joined #gluster
11:14 kotreshhr joined #gluster
11:15 amud Hi nbalacha : I am amudhan
11:17 nbalacha amud, hi
11:17 nbalacha amud, I have opened a private chat window, do you see it?
11:17 amud yes
11:18 nbalacha amud, I will continue this chat there
11:20 gildub joined #gluster
11:27 zhangjn joined #gluster
11:29 yazhini joined #gluster
11:29 ssarah Is there a quick start guide for ubuntu?
11:36 msvbhat ssarah: All the gluster docs are here https://gluster.readthedocs.org/en/latest/
11:36 glusterbot Title: Gluster Docs (at gluster.readthedocs.org)
11:38 msvbhat ssarah: https://gluster.readthedocs.org/en/latest/Quick-Start-Guide/Quickstart/
11:38 glusterbot Title: Quick start Guide - Gluster Docs (at gluster.readthedocs.org)
11:38 msvbhat ssarah: I don't think there is a ubuntu specifc guide. But you should be able to get started from the above doc
11:42 Mr_Psmith joined #gluster
11:43 arcolife joined #gluster
11:44 _joel joined #gluster
11:45 jiffin1 joined #gluster
11:46 ssarah ty msvbhat
11:49 ssarah Can I skip the part about having two virtual disks and use plain directories instead?
11:50 ssarah I didnt provision the machines like that (with two disks). Although I don't know if there's an easy way to do that now.
11:53 bluenemo joined #gluster
11:54 msvbhat ssarah: Plain directories technically work (If you are just testing it out). But gluster throws a warning.
11:54 ppai joined #gluster
11:54 msvbhat ssarah: And some features like snapshot do not work when you don't provision with thinp lv
11:55 bluenemo hi guys. I've copied 300GB of files (mostly jpg) via rsync to my new 2 server / 2 rep setup. using the mount -t glusterfs, I had about 1MB/s write performance with rsync. Mounting on the Client side via NFS and then rsyncing did a LOT of difference! I have at least two to three times the random write speed. I used noatime with nfs
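An aside on the rsync side of this: rsync's default write-to-temp-then-rename pattern and its delta-transfer algorithm are often cited as a poor fit for FUSE-mounted GlusterFS. The flags below are standard rsync options, not Gluster ones; whether they help this particular setup is untested here, and the paths are hypothetical:

```shell
# --inplace   : skip the tempfile+rename dance, which is costly through
#               the distribute translator
# --whole-file: skip rolling checksums; on a LAN, resending whole files
#               is usually cheaper than delta reads over FUSE
rsync -av --inplace --whole-file /srcdata/ /mnt/glustervol/destdata/
```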
11:59 ssarah ty
12:01 haomaiwa_ joined #gluster
12:01 RameshN joined #gluster
12:05 overclk joined #gluster
12:05 kotreshhr left #gluster
12:05 eljrax bluenemo: Yeah, I'm seeing the same thing. Using the native client gives an extreme performance hit on operations like that
12:05 eljrax I even get OOM killing glusterfs when rsyncing lots of data
12:06 eljrax And for less streamlined applications, like Magento, putting the web root on GlusterFS results in a load time of like 10 seconds, with all the includes/require_once etc. Even with an opcode cache
12:12 bluenemo eljrax, thats pretty much what I want to do but yes, I figured using the native client wont be for me here. Do you run prodcution php like magento with clients mounting via NFS? If so, do you use cachefilesd with it? or any other performance tuning stuff for NFS? I'm not sure if I should configure my nfs clients like I normally would. didnt test it yet, my setup is currently in development.
12:12 ndk joined #gluster
12:13 bluenemo didnt see gluster killed yet though.. however with 1mb/s, I didnt expect it to ;)
12:13 jtux joined #gluster
12:14 eljrax bluenemo: We're quite keen on HA, so for some setups, we've got up to three glusterfs replicas, running on the webservers themselves, and NFS is mounted locally
12:14 eljrax But for setups where it doesn't make sense to use as many replicas, we've stopped using glusterfs
12:14 eljrax Apart from very low-traffic setups
12:14 bluenemo Tbh I've tested glusterfs for a few days - wrote a Saltstack formula so testing goes faster.. I like the ideas but I'm a bit worried about migrating production there. I know my page(s) will take the load with $my_normal_nfs setup, but then it wont be as redundant as it would be with gluster. btw I have storage servers with gluster and mount-only web clients
12:15 bluenemo eljrax, whats your current HP / HA solution for shared storage?
12:15 portante joined #gluster
12:15 bluenemo I thought about ceph but the target setup is to small..
12:16 bluenemo Maybe I'll just use NFS and make it more or less HA but setting up a failover node. I really dislike corosync / pacemaker though.. REALLY dislike it ;)
12:16 eljrax bluenemo: It's ranging quite widely. Anything from netapp NAS, to NFS on redhat clusters to glusterfs on DAS to lsyncd :)
12:16 eljrax Granted the latter doesn't really count in this context, but it does solve the same problem for some setups
12:17 bluenemo So you would not recommend gluster with nfs clients for PHP sites that might hit 50/100 concurrent users?
12:17 eljrax GlusterFS is without a doubt my favourite in terms of setup, ease of use and interface.. But the performance is just killing it
12:17 bluenemo yeah I see it the same way. I like alotta the concepts.. but yeah. its SLOW ;/
12:17 ppai joined #gluster
12:17 eljrax I wouldn't discount it, but here it doesn't always make sense. It's so network hungry, that to deploy it in the cloud, you need to have huge flavors of VMs
12:18 eljrax Even if your app can happily run on a couple of 2G instances, the network throughput you get with instances that small just doesn't cut it for glusterfs
12:18 eljrax But on dedicated hardware, where you can beef up the network, there are already established and well-tested solutions for shared files
12:19 eljrax I kind of get why it's slow, the on-access self-healing is partially what makes GlusterFS awesome for consistency, but it's also what kills it in some workloads
12:19 bluenemo I'm on amazon
12:20 bluenemo wanted to have 4 webworkers in front of a Load Balancer from Amazon and then have like two storage backends with gluster. web0 & 1 in AZ-A, web 10&11 in AZ-B, same for storage, so the traffic has to run from Availability Zone to Availability Zone. Imho another setup wouldnt make much sense - I have to use the AZ's..
12:21 bluenemo So the other Idea would be setup normal NFS Server in AZ-A, connect clients form both AZ's and failover via IP when the master NFS dies, starting the NFS Server on file backend 2. Would need manual intervention, but I'm ok with that for now.
12:21 bluenemo Which with gluster and the native client, I wouldnt need..
12:21 eljrax I'm not tremendously up to scratch on Amazon lingo, but iirc cross-AZ bandwidth/latency might not work enormously well for you then
12:21 bluenemo using nfs I would need manual intervention again, as I can only mount from one srv
12:22 bluenemo Its not really nifty, no..
12:23 bluenemo But I'm not really happy with having two NFS and some cron rsync between them..
12:23 bluenemo or maybe DRBD with OCFS2 but then I'll have to do the corosync/pacemaker dance again.. and that just never lets me sleep well.
12:23 eljrax No, that doesn't sit right, does it? :)
12:24 eljrax If only everybody could use object stores and proper deployment pipelines, eh ? :P
12:27 unclemarc joined #gluster
12:27 bluenemo haha :D YES. but well. To be honest I'm waiting for Amazon to bring out this fancy new shared filesystem of theirs.
12:28 bluenemo Hm. let me go for a lunch break. I'll think about hand-failover NFS. But I guess I'll have to do that. meh :( Wished I had enough scale for ceph ;)
12:29 eljrax I don't know how ceph performs compared to glusterfs for the same workload
12:30 bluenemo I did some tests about 2 years ago with like 20 workstations in the 100 office network M) well, we had more than 1mb/s write.
12:32 jrm16020 joined #gluster
12:36 overclk joined #gluster
12:37 suliba joined #gluster
12:40 spalai left #gluster
12:42 sblanton hi everyone! I'm newish to gluster. I've inherited it and have cleaned up a bit an brought everything to v3.5.5
12:42 ppp joined #gluster
12:43 sblanton Just wondering where you might point someone trying to resolve E [afr-self-heal-common.c:3042:afr_log_self_heal_completion_status] 0-garchive-1-replicate-7: entry self heal failed, on /x/y/z
12:43 sblanton is there a good doc or page?
12:44 sblanton an 'ls' on the affected path hangs indefinitely
12:45 bluenemo whats gluster volume my_vol status ?
12:46 prasanth_ joined #gluster
12:47 sblanton status looks good. one sec
12:48 sblanton All the bricks are online.
12:49 sblanton I do have two nfs servers not online, but my understanding is we are not using nfs, but native gluster to mount
12:49 sblanton Vol status is that at a rebalance is in progress
12:49 sblanton all self-heal daemon's are online
12:50 theron joined #gluster
12:50 paraenggu joined #gluster
12:50 ssarah ty msvbhat, it was really easy to setup using that guide. You know if there is a quick guide to creating a client machine? I'm reading this http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/index.html?highlight=client, and it's mentioning stuff like fuse that wasnt in the quick guide you gave me.
12:50 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.org)
12:50 ssarah nice bot
12:52 kotreshhr joined #gluster
12:53 hagarth joined #gluster
12:53 neha joined #gluster
12:55 paraenggu Hi there, I'm seeking guidance regarding the "cluster.read-hash-mode" option. We have a 2 node replicated Gluster 3.5 volume, used for VM image storage via gfapi, and noticed that almost all read operations are happening only on one node. We hope that we can better distribute the load with the help of the "cluster.read-hash-mode" option. Does anyone have experience with it? I'm not sure if I'm on the right track and which mode (1 or 2) w
12:57 ppai joined #gluster
12:57 shaunm joined #gluster
12:57 jiffin joined #gluster
12:57 bennyturns joined #gluster
13:00 sakshi joined #gluster
13:04 sblanton joined #gluster
13:06 kotreshhr left #gluster
13:10 sblanton joined #gluster
13:12 sblanton paraenggu - interested in knowing similar...
13:12 EinstCrazy joined #gluster
13:20 maveric_amitc_ joined #gluster
13:22 marlinc joined #gluster
13:24 firemanxbr joined #gluster
13:24 paescuj paraenggu, sblanton: also interested in
13:33 harold joined #gluster
13:36 Bhaskarakiran joined #gluster
13:41 theron_ joined #gluster
13:42 pranithk left #gluster
13:43 haomaiwa_ joined #gluster
13:43 dgandhi joined #gluster
13:44 dlambrig_ joined #gluster
13:44 dgandhi joined #gluster
13:45 zhangjn joined #gluster
13:47 zhangjn joined #gluster
13:48 zhangjn joined #gluster
13:49 shyam joined #gluster
13:53 mpietersen joined #gluster
13:55 mpietersen joined #gluster
13:59 zhangjn joined #gluster
14:01 haomaiwa_ joined #gluster
14:07 skylar joined #gluster
14:11 kdhananjay joined #gluster
14:11 a_ta joined #gluster
14:28 bennyturns joined #gluster
14:29 plarsen joined #gluster
14:33 maveric_amitc_ joined #gluster
14:34 neofob joined #gluster
14:38 julim joined #gluster
14:40 EinstCrazy joined #gluster
14:40 zhangjn joined #gluster
14:41 zhangjn joined #gluster
14:42 zhangjn joined #gluster
14:42 zhangjn joined #gluster
14:43 social joined #gluster
14:43 _maserati joined #gluster
14:44 zhangjn joined #gluster
14:51 nangthang joined #gluster
14:51 shyam joined #gluster
14:52 mpietersen joined #gluster
14:54 mpietersen joined #gluster
14:56 mpietersen joined #gluster
14:57 s19n https://gluster.readthedocs.org/en/release-3.7.0/Features/dht/ says: "The directory commit hash is set to the volume commit hash when the directory is created, and whenever the directory is fully rebalanced so that all files are at their hashed locations."
14:57 glusterbot Title: Distributed Hash Tables - Gluster Docs (at gluster.readthedocs.org)
14:57 mpietersen joined #gluster
14:58 s19n on a newly created directory I am seeing trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9, so volume commit hash != directory commit hash.
14:59 s19n any idea on what could be the reason?
15:01 poornimag joined #gluster
15:01 haomaiwa_ joined #gluster
15:01 Bhaskarakiran joined #gluster
15:02 cholcombe joined #gluster
15:04 calavera joined #gluster
15:05 shaunm joined #gluster
15:09 zhangjn_ joined #gluster
15:18 nangthang joined #gluster
15:20 Bhaskarakiran joined #gluster
15:26 zhangjn joined #gluster
15:28 zhangjn joined #gluster
15:28 zhangjn joined #gluster
15:35 zhangjn_ joined #gluster
15:37 Philambdo joined #gluster
15:46 jiffin joined #gluster
15:46 zhangjn joined #gluster
15:52 jiffin joined #gluster
15:56 rwheeler joined #gluster
15:57 squizzi_ joined #gluster
15:57 jiffin1 joined #gluster
16:05 s19n no hints?
16:05 jiffin joined #gluster
16:07 s19n another question: which part of a gfid is used to determine the brick which will contain the file?
16:08 s19n how do I check if a gfid falls inside the range indicated by the trusted.glusterfs.dht xattr?
16:10 s19n answers are welcome even after that I disconnect, thanks! :)
16:13 jiffin1 joined #gluster
16:15 ekuric joined #gluster
16:19 skoduri joined #gluster
16:20 Trefex1 joined #gluster
16:21 JoeJulian s19n: google 'dht misses are expensive'
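Expanding on JoeJulian's pointer: DHT hashes the file name (not the gfid) into a 32-bit space and compares it with the range stored in each parent directory's trusted.glusterfs.dht xattr. A minimal sketch for decoding the value s19n pasted above, assuming the 16-byte layout of four big-endian uint32s (commit hash, hash type, range start, range end); the field order has varied across GlusterFS versions, so check dht-layout.c for yours:

```shell
# Decode trusted.glusterfs.dht. ASSUMED layout: four big-endian uint32s
# (commit hash, hash type, range start, range end); verify against the
# dht-layout.c of your GlusterFS version before trusting the field order.
val="0x0000000100000000000000002aaaaaa9"   # value pasted by s19n above
hex=${val#0x}
commit=$((16#${hex:0:8}))
htype=$((16#${hex:8:8}))
start=$((16#${hex:16:8}))
stop=$((16#${hex:24:8}))
echo "commit=$commit type=$htype start=$start stop=$stop"

# Does a given 32-bit filename hash fall inside this directory's range?
name_hash=$((16#10000000))
if [ "$name_hash" -ge "$start" ] && [ "$name_hash" -le "$stop" ]; then
    echo "in range"
else
    echo "out of range"
fi
```

A file is a candidate for migration by a data rebalance when the hash of its name falls outside the range stored on the brick it currently sits on.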
16:24 shortdudey123_ joined #gluster
16:24 bcicen_ joined #gluster
16:24 _maserati_ joined #gluster
16:24 Ramereth joined #gluster
16:24 wolsen joined #gluster
16:24 neofob joined #gluster
16:24 kbyrne joined #gluster
16:25 coreping joined #gluster
16:25 kkeithley joined #gluster
16:27 ndk joined #gluster
16:29 bluenemo I had a client mount gluster via nfs, rebooted both nodes, now the client has a load average of 100 - however nothing showing up waiting for i/o in top.
16:29 bluenemo client is still responsive to ssh normally
16:30 poornimag joined #gluster
16:31 bluenemo this seems worth reading in this situation http://www.netapp.com/us/media/tr-3183.pdf
16:31 bluenemo guess its more of a nfs hard mounting thing.
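The symptom bluenemo describes fits the hard-mount behaviour the NetApp paper covers: with a hard NFS mount (the Linux default), processes touching the dead mount sit in uninterruptible D state, which counts toward load average even though top shows no I/O wait. A hedged sketch using standard Linux NFS mount options, with hypothetical hostnames and paths (Gluster's built-in NFS server speaks NFSv3):

```shell
# Find the blocked (D-state) processes inflating the load average:
ps -eo state,pid,comm | awk '$1 == "D"'

# A soft mount fails I/O after timeo/retrans expires instead of hanging
# forever; the trade-off is possible EIO returned to applications.
mount -t nfs -o vers=3,soft,timeo=30,retrans=3,noatime \
    gluster-node1:/myvol /mnt/glustervol
```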
16:33 tru_tru joined #gluster
16:37 overclk joined #gluster
16:52 _maserati joined #gluster
16:59 zhangjn_ joined #gluster
17:02 theron joined #gluster
17:06 kotreshhr joined #gluster
17:06 kotreshhr left #gluster
17:08 zhangjn joined #gluster
17:10 mreamy joined #gluster
17:15 RedW joined #gluster
17:15 So4ring joined #gluster
17:19 Rapture joined #gluster
17:21 maveric_amitc_ joined #gluster
17:24 rwheeler joined #gluster
17:27 Pupeno_ joined #gluster
17:30 Rapture joined #gluster
17:32 calavera joined #gluster
17:34 suliba joined #gluster
17:35 poornimag joined #gluster
17:37 overclk joined #gluster
17:40 overclk joined #gluster
17:44 So4ring joined #gluster
17:49 jiffin joined #gluster
17:52 shyam joined #gluster
18:00 amye joined #gluster
18:17 calavera joined #gluster
18:19 theron joined #gluster
18:25 jobewan joined #gluster
18:25 jrm16020 joined #gluster
18:29 amye joined #gluster
18:42 jiffin1 joined #gluster
18:45 papamoose joined #gluster
18:51 Jeroenpc joined #gluster
18:54 jiffin1 joined #gluster
18:54 mhulsman joined #gluster
19:01 chirino joined #gluster
19:01 Rapture joined #gluster
19:03 Rapture joined #gluster
19:11 calavera joined #gluster
19:28 theron joined #gluster
19:34 sblanton joined #gluster
19:34 paraenggu joined #gluster
19:36 _maserati joined #gluster
19:37 shaunm joined #gluster
19:40 pdrakeweb joined #gluster
19:44 bennyturns joined #gluster
20:06 calavera joined #gluster
20:09 woakes070048 joined #gluster
20:11 DV joined #gluster
20:24 htrmeira joined #gluster
20:26 afics joined #gluster
20:28 fala joined #gluster
20:43 poornimag joined #gluster
20:46 chirino joined #gluster
20:47 chirino joined #gluster
20:47 skoduri joined #gluster
20:57 Lee- left #gluster
21:03 skylar joined #gluster
21:40 DJCl34n joined #gluster
21:40 rwheeler joined #gluster
21:43 DJClean joined #gluster
21:44 johnmark joined #gluster
21:45 poornimag joined #gluster
21:48 csim johnmark: hey, just the one I was looking for, you know who is the primary contact for rackspace gluster account ?
21:48 csim ( ie, you, or justin ? )
21:51 amye johnmark: can I have it if it's you? :D
22:07 DV joined #gluster
22:09 zhangjn joined #gluster
22:12 zhangjn joined #gluster
22:13 harold joined #gluster
22:13 zhangjn joined #gluster
22:30 beeradb_ joined #gluster
22:34 plarsen joined #gluster
22:36 zhangjn joined #gluster
22:38 zhangjn joined #gluster
22:55 campee joined #gluster
22:56 campee what exactly determines the size of a gluster brick? i have a volume made up of a single brick that shows up as 16GB when I mount it. The brick lives on a file system that has 300GB free on it and I would like to expand the size of the brick. How can I do that?
22:57 calavera joined #gluster
22:57 zhangjn_ joined #gluster
23:03 campee it doesn't have a quota..
23:05 campee if i run 'gluster volume info' i don't even see the size of the volume
23:09 paraenggu1 joined #gluster
23:12 alghost joined #gluster
23:12 campee halp
23:17 DavidVargese joined #gluster
23:23 paraenggu1 left #gluster
23:24 rohit joined #gluster
23:25 Guest43920 Hi.. I am a linux newbie and looking for help to run glusterfs on NixOS
23:29 poornimag joined #gluster
23:41 htrmeira joined #gluster
23:58 sage joined #gluster
