
IRC log for #gluster, 2015-07-09


All times shown according to UTC.

Time Nick Message
00:03 theron joined #gluster
00:15 TheCthulhu joined #gluster
00:36 edwardm61 joined #gluster
00:45 nangthang joined #gluster
00:52 topshare joined #gluster
00:52 TheCthulhu3 joined #gluster
00:55 davidbitton joined #gluster
00:55 TheCthulhu joined #gluster
01:08 rndm joined #gluster
01:08 vmallika joined #gluster
01:09 rndm so is there a guide that describes how a high availability configuration works and what sort of guarantees exist with respect to hardware failure?
01:10 Glusternoob joined #gluster
01:11 Glusternoob Is the trusted.glusterfs.quota.size in 3.7 still hex encoded?
01:11 theron joined #gluster
01:33 Lee1092 joined #gluster
01:46 topshare joined #gluster
02:21 ghenry joined #gluster
02:21 coredump joined #gluster
02:26 nangthang joined #gluster
02:29 lyang0 joined #gluster
02:32 dgbaley Is there a gluster alternative to the failure domains that can be defined in ceph's crush map?
02:38 dgandhi joined #gluster
02:59 harish joined #gluster
03:08 dgbaley Ah, this looks like it http://www.gluster.org/community/documentation/index.php/Features/data-classification
03:10 overclk joined #gluster
03:17 bharata-rao joined #gluster
03:19 kovshenin joined #gluster
03:22 nishanth joined #gluster
03:29 TheSeven joined #gluster
03:33 shubhendu joined #gluster
03:36 julim joined #gluster
03:41 bennyturns joined #gluster
03:44 ashiq joined #gluster
03:46 soumya joined #gluster
03:49 mikedep3- joined #gluster
03:49 atinm joined #gluster
03:51 ghenry joined #gluster
03:52 Manikandan joined #gluster
03:53 vmallika joined #gluster
03:54 raghug joined #gluster
04:01 nbalacha joined #gluster
04:01 nbalacha joined #gluster
04:03 ppai joined #gluster
04:06 mikedep333 joined #gluster
04:09 RameshN_ joined #gluster
04:14 soumya_ joined #gluster
04:14 jdossey joined #gluster
04:15 kovshenin joined #gluster
04:16 kanagaraj joined #gluster
04:18 DV__ joined #gluster
04:22 hagarth joined #gluster
04:28 itisravi joined #gluster
04:39 meghanam joined #gluster
04:40 meghanam joined #gluster
04:45 anil joined #gluster
04:49 RameshN_ joined #gluster
04:52 sakshi joined #gluster
04:59 jiffin joined #gluster
04:59 Bhaskarakiran joined #gluster
05:08 sabansal_ joined #gluster
05:09 gem joined #gluster
05:11 coredump joined #gluster
05:12 ndarshan joined #gluster
05:12 atinm joined #gluster
05:14 topshare joined #gluster
05:15 jdossey joined #gluster
05:16 rafi joined #gluster
05:18 nbalachandran_ joined #gluster
05:19 rafi1 joined #gluster
05:20 vimal joined #gluster
05:21 glusterbot News from newglusterbugs: [Bug 1241341] Multiple DBus signals to export a volume that's already exported crashes NFS-Ganesha <https://bugzilla.redhat.com/show_bug.cgi?id=1241341>
05:22 raghug joined #gluster
05:30 yazhini joined #gluster
05:31 anrao joined #gluster
05:32 pppp joined #gluster
05:37 nishanth joined #gluster
05:43 spandit joined #gluster
05:45 maveric_amitc_ joined #gluster
05:47 Manikandan joined #gluster
05:52 jordie joined #gluster
05:52 gem_ joined #gluster
05:52 soumya_ joined #gluster
05:52 kdhananjay joined #gluster
05:56 topshare joined #gluster
05:57 raghu joined #gluster
06:00 liewegas joined #gluster
06:05 liewegas joined #gluster
06:08 kshlm joined #gluster
06:08 astr hey guys, i get my logs flooded by 0-volume1-server: 22207: REMOVEXATTR /in/090715_34.xev (ce7aed87-6623-4266-9c42-da39b48113e1) of key security.ima ==> (No data available) and self healing is running almost constantly. anyone knows whats going on?
06:08 nishanth joined #gluster
06:12 hagarth astr: what version of gluster is this?
06:14 astr 3.5.4
06:17 hagarth astr: seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=1144527
06:17 glusterbot Bug 1144527: unspecified, unspecified, ---, vbellur, CLOSED CURRENTRELEASE, log files get flooded when removexattr() can't find a specified key or value
06:18 shubhendu joined #gluster
06:18 hagarth astr: not sure about why self healing is running always. how are you determining that self-healing is happening forever?
06:21 astr hagarth: this seems to be the same bug as this https://bugzilla.redhat.com/show_bug.cgi?id=1192832 and should be fixed in my version
06:21 glusterbot Bug 1192832: unspecified, unspecified, 3.5.4, vbellur, CLOSED CURRENTRELEASE, log files get flooded when removexattr() can't find a specified key or value
06:21 atalur joined #gluster
06:21 astr my glustershd.log is flooded by afr-self-heal-common.c:3042:afr_log_self_heal_completion_status] 0-volume1-replicate-0:  metadata self heal  is successfully completed,   metadata self heal from source
06:22 jtux joined #gluster
06:22 nsoffer joined #gluster
06:23 astr and i get a lot of files from "gluster volume heal VOLNAME info"
06:23 vmallika joined #gluster
06:23 hagarth astr: can you check what is the loglevel for this message?
06:24 hagarth itisravi: any help for astr here on self-healing?
06:26 astr hagarth: self healing and the REMOVEXATTR is INFO, but before every REMOVEXATTR i get this 0-volume1-marker: No data available occurred while creating symlinks which is ERROR
06:26 itisravi astr: what gluster version are you running?
06:27 itisravi astr: sorry, I see that you're running 3.5.4/
06:30 spandit joined #gluster
06:30 itisravi astr: could you check if the client can see all the bricks of the replica in question?
06:33 owlbot` joined #gluster
06:34 deepakcs joined #gluster
06:35 karnan joined #gluster
06:42 astr itisravi: which command is that?
06:44 itisravi astr: There is no command as such, but in the client log, you would see messages like "disconnected from volname-client-x".
06:45 itisravi astr: I'm just trying to guess the reason for files always needing constant healing. Could be due to the client not seeing one of the bricks and thus writes to the other brick would always need to be healed by the self-heal daemon.
06:51 glusterbot News from newglusterbugs: [Bug 1241379] Reduce 'CTR disabled' brick log message from ERROR to INFO/DEBUG <https://bugzilla.redhat.com/show_bug.cgi?id=1241379>
06:57 astr itisravi: nothing that shows that the client can't connect. The log shows that the client could connect to both bricks. the only WARNING i get there is "could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info (No such file or directory)" which i don't think should be a problem
06:58 hagarth astr: can you please paste one complete log message?
06:58 astr [2015-07-08 09:41:34.481163] W [common-utils.c:2578:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info (No such file or directory)
06:59 astr [2015-07-08 09:41:34.481191] W [common-utils.c:2611:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may consume reserved port
07:10 raghug joined #gluster
07:13 topshare joined #gluster
07:15 [Enrico] joined #gluster
07:17 gem_ joined #gluster
07:21 nbalachandran_ joined #gluster
07:21 vimal joined #gluster
07:30 astr hagarth: they come everytime i mount a client. i just did a test and if i copy a file around within the gluster fs i get a log message that the file's metadata needed self healing
07:40 arcolife joined #gluster
07:50 dgbaley rafi1: ping
07:50 glusterbot dgbaley: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:50 rafi1 dgbaley: pong
07:50 dgbaley You said you were going to write a test script, but I was able to verify gfapi+rdma using FIO
07:50 dgbaley as nonroot
07:50 dgbaley So something with qemu is bad, but I'll get to that later
07:51 rafi1 dgbaley: so rdma works for non-root user also
07:51 rafi1 dgbaley: right ?
07:51 dgbaley Unfortunately the FIO gfapi engine throws errors specifically for RDMA sequential reads
07:51 dgbaley rafi1: yes
07:52 rafi1 dgbaley: do you have any logs  ?
07:53 rafi1 dgbaley: so i guess there is no need to write a script anymore, ;)
07:53 dgbaley The gfapi client with debug=INFO, the brick logs, the glusterd logs, and the kernel logs on both ends were silent
07:53 dgbaley https://github.com/axboe/fio/blob/master/engines/glusterfs_sync.c#L66
07:54 dgbaley I just ran 5 hours of benchmarks though so I didn't want to interrupt anything to look into it yet
07:54 rafi1 dgbaley: there is a limit for memory that would allow application to register with rdma
07:55 rafi1 dgbaley: may be you can try to set the limit to unlimited
07:55 dgbaley I set the memlock limit to unlimited on both ends
07:56 dgbaley Is that what you're referring to?
07:56 LebedevRI joined #gluster
07:57 rafi1 dgbaley: etc/security/limits.conf
07:57 rafi1 * soft memlock unlimited
07:57 rafi1 * hard memlock unlimited
07:57 rafi1 dgbaley: ^ exactly
07:58 dgbaley Yes, those are set already
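(Aside on the memlock limit discussed above: RDMA memory registration by an unprivileged process is capped by RLIMIT_MEMLOCK, which is what the limits.conf entries rafi1 pasted raise. The following is a minimal illustrative sketch, not part of GlusterFS, that simply reports the limit a process actually inherits, a quick way to confirm the limits.conf change took effect for the session running the gfapi client.)

    /* Minimal sketch: report the RLIMIT_MEMLOCK this process sees, which caps
     * how much memory an RDMA application (or gfapi client) can register.
     * Illustrative only; not part of the GlusterFS code base. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        if (rl.rlim_cur == RLIM_INFINITY)
            printf("memlock soft limit: unlimited\n");
        else
            printf("memlock soft limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);

        if (rl.rlim_max == RLIM_INFINITY)
            printf("memlock hard limit: unlimited\n");
        else
            printf("memlock hard limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_max);

        return 0;
    }

(Note that pam_limits applies limits.conf at login, so a process started by systemd rather than from a login shell may need LimitMEMLOCK=infinity in its unit file instead.)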
08:00 rafi1 dgbaley: cool, I will try to reproduce this in my setup
08:00 fsimonce joined #gluster
08:00 rafi1 dgbaley: can you describe the exact reproducible steps ;)
08:01 rafi1 dgbaley: https://github.com/axboe/fio/blob/master/engines/glusterfs_sync.c
08:01 dgbaley Yes, I'm getting an FIO command line now
08:06 dgbaley rafi1: fio --name=test --ioengine=gfapi --brick ss-61 --volume=ssd-rdma-nostripe --numjobs=1 --ramp_time=1 --runtime=5 --time_based --fallocate=keep --direct=1 --bs=16m --rw=read --size=2g --unlink=1
08:07 dgbaley Narrowing in on the issue, if I lower the 16m to e.g. 1m, no error. If I change to randread, and leave it at 16m there's an error. The benchmarks I sent to you have 16m sequential IO and 4k rand-io
08:09 rafi1 dgbaley: i have never used FIO commands before, so it might take some time to reproduce it,
08:10 rafi1 dgbaley: by the way, , are you using stripe volumes ?
08:10 dgbaley Oh, it should be simple: Just replace --brick and --volume
08:10 dgbaley rafi1: No, my test results using stripes had lots of perf issues / errors so I dropped them for now
08:11 dgbaley rafi1: Also, one of the advantages IMO of gluster over ceph is that I can have my Virtual Machine disks live as simple whole files on the backend FS
08:12 rafi1 dgbaley: ya, :)
08:14 * rafi1 is setting up his ib devices and gluster to run fio commands
08:14 dgbaley And also, in an environment with lots of concurrent users, I'm not convinced striping (or even the new sharding) really helps since the IO patterns should be well distributed anyway
08:14 Trefex joined #gluster
08:16 kshlm joined #gluster
08:18 hagarth dgbaley: why do you think that with sharding io patterns will not be well distributed?
08:18 dgbaley http://fpaste.org/242155/ $(volume info all). I have 3 nodes (and likewise a replication count of 3). Each node has 3 * Samsung 850 Pro 256G SSD and 9 * HGST UltraStar SAS 4TB HDD. They each also have 128G RAM and 2 * Xeon 8 core CPUs.
08:18 ajames-41678 joined #gluster
08:19 dgbaley That's overkill RAM and CPU I'm sure but I bought the hardware and didn't want to have to even consider  CPU or RAM as bottlenecks and also have the option of using Ceph which seems to need a lot more resources
08:19 dgbaley hagarth: I don't think that. I think that with lots of concurrent users the IO pattern is already even across disks so sharding/striping isn't so enticing
08:21 hagarth dgbaley: sharding also makes it easier for things like self-healing, tiering, rebalance etc. since they have to operate on smaller chunks than large entities (assuming you have large file workloads)
08:22 dgbaley rafi1: Last bit of info about the setup that I can think of. The 3 servers and 1 client are all fresh centos 7 installations with elrepo, epel, kernel-ml, the "infiniband support" group, and memlock limits set to unlimited.
08:22 karnan joined #gluster
08:22 anrao joined #gluster
08:23 dgbaley rafi1: And the NICs are Mellanox 40G QSFP Ethernet (So RDMA is RoCE)
08:23 dgbaley hagarth: thanks for those points
08:24 getessier joined #gluster
08:24 dgbaley hagarth: but I'm still recovering mentally from Ceph so I'd like to keep my whole files for now -- my workload is almost exclusively OpenStack Cinder volumes. However I do have 4TB of NFS home directories (as giant ceph RBDs) that I'd like to move as well
08:25 garmstrong joined #gluster
08:25 rafi1 dgbaley: ok.
08:26 hagarth dgbaley: yes, sharding is still in beta. we will continue improving behavior for both whole files and chunked objects.
08:27 rafi1 dgbaley: what is the --brick field ?
08:28 dgbaley rafi1: BTW, my compliments on the RDMA implementation (not sure how involved you were originally). But my understanding is that despite having clear theoretical advantages, based on a lot of papers I've read it can be hard to realize in practice. If you look at all those tests I ran, RDMA almost always beat out TCP, sometimes by a lot
08:28 dgbaley rafi1: --brick is what the gfapi ioengine author decided to call the hostname
08:28 glusterbot dgbaley: rafi1's karma is now -1
08:29 dgbaley ..., sorry?
08:31 rafi1 dgbaley: I'm glad to hear that, I did some performance boosting , and Raghavendra gowdappa did most of the implementation
08:31 shubhendu joined #gluster
08:31 rafi1 dgbaley: so credit goes to him raghu
08:33 rafi1 dgbaley: may be you can write a blog with your findings and link into the gluster site ;)
08:34 rafi1 dgbaley: if it is some thing that can be shared  :)
08:34 anoopcs redhat
08:34 rafi1 raghug ^
08:35 dgbaley I can share the data, I don't want that site to be public though. It's at my home, I'd need to move it to my university first
08:36 dgbaley I had some other dimensions I wanted to add. I want to try raid0 and raid6 and have 1 brick per machine. I also want to try ext4 and external journals.
08:37 rafi1 dgbaley: cool,
08:37 dgbaley I think the systems need to be tuned more. I was hoping the buffer/cache ram values would be a lot higher but right now 110 of 128G or so is sitting cold. I hope it can be an easy read-cache
08:39 rafi1 dgbaley: i started my tests, on simple distributed volume, with your commands
08:40 rafi1 dgbaley: let's see what happens
08:40 rafi1 dgbaley: did you see any Transport endpoint is not connected
08:40 dgbaley Yup, when I run FIO directly, that's what I see
08:41 rafi1 dgbaley: ok
08:41 dgbaley That "IO failed" is also in there too if you look carefully
08:41 dgbaley But I think transport endpoint not connected is more valuable
08:46 tanuck joined #gluster
08:47 karnan joined #gluster
08:50 rafi1 dgbaley: based on logs, it is something related to memory registration, I will get back you
08:52 sysconfig joined #gluster
08:53 dgbaley rafi1: cool, so you think it's a bug?
08:53 kotreshhr joined #gluster
08:53 rafi1 dgbaley: most probably, but need to confirm, because sometimes it is passing the tests without any errors,
08:54 rafi1 dgbaley: I will let you know, if that is the case I will encourage you to file a bug
08:54 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:57 ctria joined #gluster
08:59 dgbaley 3AM here, going to bed. Thanks for your help.
09:00 atinm joined #gluster
09:01 itisravi_ joined #gluster
09:01 rafi1 dgbaley: good night,
09:01 rafi1 dgbaley: I will update you tomorrow
09:03 itisravi joined #gluster
09:04 kshlm joined #gluster
09:05 ndarshan joined #gluster
09:08 kaushal_ joined #gluster
09:09 atinm joined #gluster
09:20 kows joined #gluster
09:27 kows hello :) I have a distributed-replicated setup with two bricks and a client mounting the volume using the fuse client. One of the two bricks went offline this morning and on the client's logs I now see "0-gv0-client-0: remote operation failed: Bad file descriptor"
09:27 kows I seem to be able to access the volume fine from the client and I'm not sure what the error is about. Can someone please help me? :)
09:30 atalur_ joined #gluster
09:30 nangthang joined #gluster
09:33 ndarshan joined #gluster
09:33 jcastill1 joined #gluster
09:35 dusmant joined #gluster
09:36 nsoffer joined #gluster
09:37 kshlm joined #gluster
09:38 jcastillo joined #gluster
09:45 atinm joined #gluster
09:58 kovshenin joined #gluster
10:02 jmarley joined #gluster
10:04 jcastill1 joined #gluster
10:08 ndarshan joined #gluster
10:09 jcastillo joined #gluster
10:18 Manikandan joined #gluster
10:19 jdossey joined #gluster
10:22 glusterbot News from newglusterbugs: [Bug 1241480] ganesha volume export fails in rhel7.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1241480>
10:23 kshlm joined #gluster
10:25 jcastill1 joined #gluster
10:26 Pupeno joined #gluster
10:27 Slashman joined #gluster
10:28 lejav joined #gluster
10:29 lejav Hello! I am evaluating GlusterFS and I have a question about read performance compared to write performace
10:30 lejav I have 12 storage nodes, configured in Distributed-Replicated with 2 replicas
10:31 jcastillo joined #gluster
10:31 lejav I have 10 Gb network
10:32 lejav when I am doing a simple test with dd bs=1M count=102400 (100 GB file with blocksize=1M), I have good perfs when writing: around 550 MB/s - 10 Gb network is the bottleneck (2 replicas)
10:32 lejav but when reading, performances are less good: 470 MB/s
10:32 lejav I read one replica and read performances on the node itself might be better (around 800 MB/s)
10:32 lejav I tried several parameters:
10:33 lejav performance.read-ahead: on
10:33 lejav performance.read-ahead-page-count: 16
10:33 lejav performance.io-thread-count: 32
10:33 lejav performance.cache-size: 1GB
10:33 lejav Has somebody hints to increase these read performances?
10:35 atinm joined #gluster
10:48 Trefex joined #gluster
10:52 soumya_ joined #gluster
10:55 nsoffer joined #gluster
10:58 vmallika joined #gluster
11:03 kotreshhr joined #gluster
11:05 Manikandan joined #gluster
11:07 dusmant joined #gluster
11:10 ira joined #gluster
11:10 Jeroenpc joined #gluster
11:12 Jeroenpc I have a question: ovirt seems like a great product, but i need something more lightweight to manage gluster. Webmin seems like an obvious choice and google tells me there has been some talk here in this channel about a webmin module for gluster. What is the status of that?
11:12 Jeroenpc I have made a webmin module in the past. I may contribute to it if needed.
11:13 arcolife joined #gluster
11:19 kkeithley1 joined #gluster
11:20 jdossey joined #gluster
11:22 doekia joined #gluster
11:25 atalur joined #gluster
11:25 jcastill1 joined #gluster
11:25 raghug joined #gluster
11:30 jcastillo joined #gluster
11:34 harish_ joined #gluster
11:37 jmarley joined #gluster
11:37 ajames-41678 joined #gluster
11:40 kevein joined #gluster
11:43 plarsen joined #gluster
11:45 Trefex1 joined #gluster
11:49 dusmant joined #gluster
11:51 kotreshhr joined #gluster
11:59 kaushal_ joined #gluster
12:01 meghanam joined #gluster
12:01 bennyturns joined #gluster
12:01 lpabon joined #gluster
12:02 Philambdo joined #gluster
12:03 atinm joined #gluster
12:05 B21956 joined #gluster
12:07 hagarth joined #gluster
12:09 gildub joined #gluster
12:09 jtux joined #gluster
12:10 anrao joined #gluster
12:12 meghanam joined #gluster
12:16 andstr joined #gluster
12:21 jdossey joined #gluster
12:26 jiffin1 joined #gluster
12:28 elico joined #gluster
12:32 ChrisNBlum joined #gluster
12:32 unclemarc joined #gluster
12:34 kanagaraj joined #gluster
12:35 bjornar joined #gluster
12:38 pdrakeweb joined #gluster
12:40 andreas__ joined #gluster
12:41 jcastill1 joined #gluster
12:43 ChrisNBlum joined #gluster
12:44 julim joined #gluster
12:46 jcastillo joined #gluster
12:47 Asmadeus joined #gluster
12:50 soumya__ joined #gluster
12:51 jdossey joined #gluster
12:53 glusterbot News from newglusterbugs: [Bug 1236289] BitRot :- Files marked as 'Bad' should not be accessible from mount <https://bugzilla.redhat.com/show_bug.cgi?id=1236289>
12:53 glusterbot News from newglusterbugs: [Bug 1241529] BitRot :- Files marked as 'Bad' should not be accessible from mount <https://bugzilla.redhat.com/show_bug.cgi?id=1241529>
12:54 raghug joined #gluster
12:55 jcastill1 joined #gluster
12:58 rwheeler joined #gluster
12:58 l0uis lejav: jumbo frames enabled?
13:00 jcastillo joined #gluster
13:01 ajames-41678 joined #gluster
13:01 dusmant joined #gluster
13:02 DV joined #gluster
13:05 magamo joined #gluster
13:05 magamo Heya folks.
13:05 magamo So, somehow one of my nodes is duplicated in peer status/pool list
13:05 magamo And this seems to be blocking volume status.
13:06 magamo Is there a way to drop the duplicate (with an erroneous UUID)?
13:06 topshare joined #gluster
13:07 topshare joined #gluster
13:08 topshare joined #gluster
13:11 cyberswat joined #gluster
13:12 shyam joined #gluster
13:19 aaronott joined #gluster
13:24 Trefex joined #gluster
13:25 aaronott1 joined #gluster
13:26 theron joined #gluster
13:26 georgeh-LT2 joined #gluster
13:32 Trefex1 joined #gluster
13:38 jrm16020 joined #gluster
13:41 mpietersen joined #gluster
13:44 kotreshhr left #gluster
13:46 ajames41678 joined #gluster
13:47 hamiller joined #gluster
13:48 Trefex joined #gluster
13:50 theron joined #gluster
13:51 DV joined #gluster
14:01 shubhendu joined #gluster
14:02 sblanton_ joined #gluster
14:02 marbu joined #gluster
14:03 mckaymatt joined #gluster
14:03 curratore joined #gluster
14:06 woakes070048 joined #gluster
14:11 calavera joined #gluster
14:14 ekuric joined #gluster
14:15 ajames-41678 joined #gluster
14:15 shyam joined #gluster
14:15 firemanxbr joined #gluster
14:17 mbukatov joined #gluster
14:18 cuqa__ joined #gluster
14:22 jdossey joined #gluster
14:29 overclk joined #gluster
14:31 mckaymatt joined #gluster
14:31 coredump joined #gluster
14:35 jiffin joined #gluster
14:35 jobewan joined #gluster
14:37 Norky joined #gluster
14:37 nsoffer joined #gluster
14:38 kshlm joined #gluster
14:39 wushudoin| joined #gluster
14:40 dgandhi joined #gluster
14:53 bennyturns joined #gluster
14:58 theron_ joined #gluster
15:01 topshare joined #gluster
15:02 lpabon joined #gluster
15:03 nbalachandran_ joined #gluster
15:06 kanagaraj joined #gluster
15:07 topshare joined #gluster
15:11 mckaymatt joined #gluster
15:11 jmarley joined #gluster
15:11 kkeithley1 joined #gluster
15:16 dgbaley rafi1: any luck?
15:17 rafi1 dgbaley: on the first look, it is failure in memory registration
15:18 rafi1 dgbaley: when trying to register 16MB of data with rdma device
15:18 DoctorWedge joined #gluster
15:19 rafi1 dgbaley: today, I couldn't find much time to look into, since I had some other planned works
15:19 rafi1 dgbaley: sorry
15:19 dgbaley No worries, just checking
15:19 DoctorWedge I know this is an old version of gluster, but how do you add an extra export onto 3.0.2? I've tried this: http://pastie.org/10282171 but I just get "eta_server: volume 'beta_brick' defined as subvolume, but no authentication defined for the same" (the top half is the original export I'm trying to add to)
15:19 rafi1 dgbaley: pls file a bug , if possible
15:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:19 jcastill1 joined #gluster
15:20 rafi1 dgbaley: I will be looking into it, over the weekend, is that fine ?
15:20 dgbaley Hah, of course.
15:21 rafi1 dgbaley: :)
15:23 shyam joined #gluster
15:25 jcastillo joined #gluster
15:25 jmarley joined #gluster
15:27 dgbaley What are these version numbers on bugzilla, why no 3.6, 3.7, etc?
15:28 jiffin joined #gluster
15:29 calavera joined #gluster
15:29 cholcombe joined #gluster
15:32 soumya_ joined #gluster
15:34 squizzi_ joined #gluster
15:35 vimal joined #gluster
15:41 fsimonce joined #gluster
15:42 jmarley joined #gluster
15:47 curratore hello, If I lose a brick, could I add a new one and resync the info with a rsync from other brick with the data?
15:47 afics joined #gluster
15:50 cholcombe i ran into an interesting problem if you deploy 3 gluster hosts on a single hard drive in containers.  If your code is too fast at bringing the cluster together you get this: Host 10.0.3.3 is not in 'Peer in Cluster' state
15:50 cholcombe i should say a single spinning hdd :).  SSD's don't have this issue
15:51 cholcombe i only notice it when the hard drive is 100% pinned and io wait is starting to pile up
15:52 kows joined #gluster
15:57 calavera joined #gluster
15:57 anoopcs dgbaley, You can file the bug under the version in which you observed the issue.
16:09 aaronott joined #gluster
16:14 PeterA joined #gluster
16:18 B21956 joined #gluster
16:20 B21956 joined #gluster
16:23 glusterbot News from newglusterbugs: [Bug 1229422] server_lookup_cbk erros on LOOKUP only when quota is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1229422>
16:26 georgeh-LT2 joined #gluster
16:29 pedahzur joined #gluster
16:31 jbautista- joined #gluster
16:36 jbautista- joined #gluster
16:39 wushudoin| joined #gluster
16:39 calavera joined #gluster
16:41 bennyturns joined #gluster
16:43 cholcombe joined #gluster
16:43 wushudoin| joined #gluster
16:45 vmallika joined #gluster
16:48 wushudoin| joined #gluster
16:54 curratore where could I find a detailed list of volume set options?
17:01 arthurh curratore, something like https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md ?
17:02 magamo Anyone know how to get rid of a duplicate host in the pool?
17:03 curratore arthurh: I am looking at it, but I think something like http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options but updated
17:05 curratore I don’t know the new options and how configure them without break a good behaviour
17:06 arthurh curratore, ah.  unfortunately not, but if you do find something, let me know, I'm somewhat new to gluster as well and having the same troubles you're having with finding complete documentation for current versions (including more details about the ida translator and whatnot)
17:07 curratore arthurh: yep I am on it, I have a glusterfs setup working but I don't know if it could be better, I think it could be, but trial and error are my tools ;)
17:14 Rapture joined #gluster
17:17 firemanxbr joined #gluster
17:23 aaronott joined #gluster
17:26 calavera joined #gluster
17:34 rotbeard joined #gluster
17:56 ekuric joined #gluster
18:02 georgeh-LT2 joined #gluster
18:03 TheCthulhu1 joined #gluster
18:14 woakes070048 joined #gluster
18:15 meghanam joined #gluster
18:30 jcastill1 joined #gluster
18:34 calavera joined #gluster
18:35 jcastillo joined #gluster
18:44 bennyturns joined #gluster
18:47 calavera joined #gluster
18:54 cyberswat joined #gluster
18:54 glusterbot News from newglusterbugs: [Bug 1241666] glfs_loc_link: Update loc.inode with the existing inode incase if already exits <https://bugzilla.redhat.com/show_bug.cgi?id=1241666>
19:14 pdrakeweb joined #gluster
19:15 jbautista- joined #gluster
19:17 jmarley joined #gluster
19:20 jbautista- joined #gluster
19:21 social joined #gluster
19:21 plarsen joined #gluster
19:38 shaunm_ joined #gluster
19:41 nsoffer joined #gluster
19:49 marcoceppi joined #gluster
19:53 Lee1092 joined #gluster
20:05 cyberswat joined #gluster
20:14 calavera joined #gluster
20:18 marcoceppi joined #gluster
20:18 marcoceppi joined #gluster
20:21 jcastill1 joined #gluster
20:26 jcastillo joined #gluster
20:26 DV joined #gluster
20:36 calavera_ joined #gluster
20:47 rwheeler joined #gluster
21:03 Pupeno_ joined #gluster
21:11 gildub joined #gluster
21:13 badone joined #gluster
21:21 marcoceppi joined #gluster
21:35 nage joined #gluster
21:45 cyberswa_ joined #gluster
21:49 abyss^ joined #gluster
22:22 cholcombe if you had a library that talked sun rpc would it be possible to talk to the gluster cluster directly without the CLI ?
22:56 al joined #gluster
23:01 cyberswat joined #gluster
23:05 davidbitton joined #gluster
23:11 DV joined #gluster
23:18 Rapture joined #gluster
23:19 wushudoin| joined #gluster
23:25 wushudoin| joined #gluster
23:31 JoeJulian cholcombe: https://github.com/sahlberg/libnfs
23:31 JoeJulian You can do that for talking to the nfs service.
23:31 cholcombe good point.  i forgot about that
23:32 JoeJulian Use can use libgfapi
23:32 cholcombe yeah i like libgfapi :)
23:32 cholcombe it just doesn't expose enough
23:32 cyberswat joined #gluster
23:32 JoeJulian orly? What're you looking for?
23:32 cholcombe well lets say i want to get the current quota data
23:33 cholcombe i can either use getfattr on the brick, parse the nasty CLI output
23:33 cholcombe or try to tie into the dynamic library and write a function that fills a struct for me
23:33 cholcombe i'd love for there to be a gfapi call to get that info :)
23:33 JoeJulian You can do the getfattr through libgfapi also.
23:33 cholcombe i'd use that in a second
23:33 JoeJulian but I see. It's still a string.
23:34 cholcombe i tried going down that route but the getfattr has to make the call to the brick backend
23:34 cholcombe i don't think the quota info is on the mounted virtual filesystem that gfapi uses
23:34 JoeJulian The attribute handler is in the quota translator.
23:35 cholcombe hmm?
23:35 JoeJulian The only difference, logically, between a fuse mount and libgfapi is the fuse translator. All the rest of the graph is still in play.
23:35 cholcombe i see
23:36 cholcombe right
23:36 cholcombe that makes sense
23:36 cholcombe so if all of the graph is still in play why is libgfapi exposing so little?
23:36 cholcombe i'd like to be able to skip the cli completely if i could
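(Aside on the exchange above: a rough sketch of what "getfattr through libgfapi" could look like, following JoeJulian's point that the quota translator is still in the client-side graph. The host name "server1", volume name "volume1", and path "/some/dir" are placeholders, and whether trusted.glusterfs.quota.size is actually served back through the client graph is exactly what is being debated here, so treat this as an illustration of the glfs_getxattr API rather than a guaranteed way to read quota usage. If a value does come back it is an opaque blob, so it is hex-dumped rather than decoded.)

    /* Rough sketch: fetch trusted.glusterfs.quota.size for a directory over
     * libgfapi instead of parsing CLI output. Host, volume, and path are
     * placeholders. Illustrative only. */
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("volume1");                      /* placeholder volume */
        if (!fs)
            return 1;

        glfs_set_volfile_server(fs, "tcp", "server1", 24007);  /* placeholder host */
        glfs_set_logging(fs, "/dev/stderr", 3);

        if (glfs_init(fs) != 0) {
            perror("glfs_init");
            glfs_fini(fs);
            return 1;
        }

        unsigned char buf[256];
        ssize_t len = glfs_getxattr(fs, "/some/dir",           /* placeholder path */
                                    "trusted.glusterfs.quota.size",
                                    buf, sizeof(buf));
        if (len < 0) {
            perror("glfs_getxattr");
        } else {
            /* Opaque, brick-side-encoded value; just hex-dump the raw bytes. */
            printf("trusted.glusterfs.quota.size (%zd bytes):", len);
            for (ssize_t i = 0; i < len; i++)
                printf(" %02x", buf[i]);
            printf("\n");
        }

        glfs_fini(fs);
        return 0;
    }

(Build with something like: gcc quota_size.c -o quota_size $(pkg-config --cflags --libs glusterfs-api), assuming the glusterfs-api development package is installed.)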
23:38 JoeJulian Yeah, I hear what you're saying there. I don't know. I do know that glusterd is a steaming pile of copy-paste that is unmanageable and needs to be redesigned. Perhaps that's when it would be a valid opportunity to add library support.
23:39 cholcombe hahaha
23:39 cholcombe yeah i've been keeping an eye on the rewrite emails that pop up every now and then
23:39 JoeJulian It *has* to be done by 4.0 (imho).
23:40 cholcombe oh..
23:40 cholcombe whys that?
23:40 JoeJulian There's a lot of things that will break in its current state.
23:40 JoeJulian Way too many to be properly tested
23:41 cholcombe i have to say though with redhat behind gluster the pace of releases certainly has picked up.  maybe they can devote some people to rewriting glusterd
23:42 JoeJulian Even now, look at 3.7.0. It got no community testing during beta and pretty critical bugs were exposed after release. With a more modular approach to glusterd, changes to one translator won't leave invalid pointers all over the rest of glusterd.
23:42 cholcombe yeah that really does make me nervous.  3.7 seemed brittle when it came out
23:44 * JoeJulian would like to tell avati "I told you so" but he's too nice.
23:44 cholcombe :D
23:45 JoeJulian But I have no say in how things move forward. I'm just a user.
23:45 cholcombe avati sent me a nice email today answering my xattr question about why the 3.7 field looks so funky
23:45 kudude joined #gluster
23:46 kudude hi everyone, i'm a glusterfs newbie and have a prob, i added a new brick to my stripe volume but when i run a rebalance it fails
23:46 kudude gluster volume rebalance volume1 start
23:47 kudude volume rebalance: volume1: failed: Volume volume1 is not a distribute volume or contains only 1 brick. Not performing rebalance
23:47 kudude does rebalances not work on a stripe volume?
23:47 JoeJulian You can change the stripe count when you add a brick? I thought that was fixed...
23:48 JoeJulian And no, there's no reason to rebalance a volume that does not use the distribute translator.
23:48 kudude yea i ran this:
23:48 kudude gluster volume add-brick volume1 stripe 5 newhost:/glusterfs/brick
23:48 kudude there was 4 previous bricks
23:49 JoeJulian I freely admit I know nothing about ,,(stripe) more than I wrote in my blog article. I've not had a valid use case for it.
23:49 glusterbot Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
23:49 kudude i added this one so the stripe count is 5, correct?
23:49 JoeJulian correct
23:49 kudude so rebalance is not necessary for stripe volumes?
23:55 kudude also, anyone have any performance tuning tips for gluster in a stripe volume set hosting large files??
