
IRC log for #gluster, 2015-07-15


All times shown according to UTC.

Time Nick Message
00:10 jaank joined #gluster
00:31 TheCthulhu joined #gluster
00:33 mikedep333 joined #gluster
00:35 vmallika joined #gluster
00:39 bennyturns joined #gluster
00:40 nangthang joined #gluster
00:45 B21956 joined #gluster
00:48 sankarshan_ joined #gluster
00:53 topshare joined #gluster
00:57 cholcombe joined #gluster
01:17 DV__ joined #gluster
01:20 jcastill1 joined #gluster
01:23 topshare joined #gluster
01:24 harish joined #gluster
01:25 jcastillo joined #gluster
01:38 natarej_ dgbaley, whats the reason why there are no sequential read results for rdma on gluster?
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 kdhananjay joined #gluster
01:51 natarej_ just looking at his results at https://fio.monaco.cx/ now, i was sleeping before.  so surprised that ceph is outperforming gluster over TCP for large sequential reads
01:52 natarej_ not surprised at all that it performs abysmally at everything else
01:53 natarej_ i was not expecting to see a 50% difference in IOPs from TCP to RDMA
01:55 nangthang joined #gluster
02:05 shyam joined #gluster
02:13 jaank joined #gluster
02:22 prg3 joined #gluster
02:23 side_control joined #gluster
02:36 aravindavk joined #gluster
02:40 cell_ joined #gluster
02:40 cell_ hi there
02:40 dgbaley natarej_: https://bugzilla.redhat.com/show_bug.cgi?id=1241621 <-- I hit a bug when doing the seq read tests
02:40 glusterbot dgbaley: <'s karma is now -15
02:40 glusterbot Bug 1241621: medium, medium, ---, rkavunga, ASSIGNED , gfapi+rdma IO errors with large block sizes (Transport endpoint is not connected)
02:41 dgbaley natarej_: ceph is benefiting there by striping, which I chose not to use for gluster. But it looked to me like it was only for 1 or 2 concurrent jobs, once the system becomes more saturated, the difference shrinks
02:42 dgbaley natarej_: IOPs is exactly where RDMA should benefit over TCP
02:43 dgbaley natarej_: for throughput, the differences should be small in theory, but if you wanted to double up storage and compute nodes, then RDMA for large sequential IO should free up resources for VMs
02:43 bharata-rao joined #gluster
02:57 meghanam joined #gluster
02:59 lkthomas joined #gluster
02:59 lkthomas hey all
03:09 haomaiwang joined #gluster
03:11 craigcabrey joined #gluster
03:22 TheSeven joined #gluster
03:29 Peppard joined #gluster
03:30 topshare joined #gluster
03:31 PatNarcisoZzZ joined #gluster
03:34 nishanth joined #gluster
03:36 natarej_ dgbaley, you said that reads are 'supposedly striped' with replication 3?
03:37 atinm joined #gluster
03:40 overclk joined #gluster
03:41 badone joined #gluster
03:42 elico joined #gluster
03:45 Lee1092 joined #gluster
03:47 dusmant joined #gluster
03:49 victori joined #gluster
03:52 nishanth joined #gluster
03:56 autoditac joined #gluster
03:57 ppai joined #gluster
03:57 kanagaraj joined #gluster
03:58 meghanam joined #gluster
04:02 shubhendu joined #gluster
04:05 chirino_m joined #gluster
04:06 RameshN joined #gluster
04:10 sakshi joined #gluster
04:13 Pupeno joined #gluster
04:13 nbalacha joined #gluster
04:16 _ndevos joined #gluster
04:17 victori joined #gluster
04:17 itisravi joined #gluster
04:18 overclk joined #gluster
04:22 yazhini joined #gluster
04:22 calavera joined #gluster
04:31 topshare joined #gluster
04:31 dgbaley natarej_: yes, that's a note to some of the faculty in my department, because we were wondering about it
04:41 spandit joined #gluster
04:41 overclk joined #gluster
04:41 gildub joined #gluster
04:43 meghanam joined #gluster
04:43 gem joined #gluster
04:44 ndarshan joined #gluster
04:45 overclk_ joined #gluster
04:47 ramteid joined #gluster
05:00 jiffin joined #gluster
05:13 rafi joined #gluster
05:14 meghanam joined #gluster
05:16 hchiramm joined #gluster
05:16 pppp joined #gluster
05:20 smohan joined #gluster
05:26 anil joined #gluster
05:31 hgowtham joined #gluster
05:31 Manikandan joined #gluster
05:31 vikumar joined #gluster
05:33 rafi joined #gluster
05:34 kdhananjay joined #gluster
05:35 kdhananjay joined #gluster
05:37 ashiq joined #gluster
05:38 chirino joined #gluster
05:42 Bhaskarakiran joined #gluster
05:45 dusmant joined #gluster
05:46 soumya joined #gluster
05:47 RameshN joined #gluster
05:51 topshare joined #gluster
05:52 autoditac joined #gluster
05:57 jiffin1 joined #gluster
05:58 Saravana_ joined #gluster
06:00 soumya joined #gluster
06:01 jordie joined #gluster
06:03 maveric_amitc_ joined #gluster
06:09 deepakcs joined #gluster
06:15 pppp joined #gluster
06:16 meghanam joined #gluster
06:16 kdhananjay joined #gluster
06:20 jtux joined #gluster
06:28 autoditac joined #gluster
06:32 elico joined #gluster
06:32 overclk joined #gluster
06:33 topshare joined #gluster
06:40 kovshenin joined #gluster
06:42 jiffin joined #gluster
06:57 saurabh_ joined #gluster
06:58 uebera|| joined #gluster
06:59 nangthang joined #gluster
07:04 LebedevRI joined #gluster
07:07 maveric_amitc_ joined #gluster
07:07 topshare_ joined #gluster
07:09 [Enrico] joined #gluster
07:18 jcastill1 joined #gluster
07:19 ninkotech_ joined #gluster
07:23 jcastillo joined #gluster
07:25 zerick_ joined #gluster
07:32 mlhess joined #gluster
07:33 itisravi_ joined #gluster
07:33 itisravi_ joined #gluster
07:33 vmallika joined #gluster
07:37 topshare joined #gluster
07:37 fsimonce joined #gluster
07:37 Slashman joined #gluster
07:38 natarej__ joined #gluster
07:41 Trefex joined #gluster
07:50 ctria joined #gluster
07:54 _maserati_ joined #gluster
07:55 kdhananjay1 joined #gluster
07:56 topshare joined #gluster
08:04 gem joined #gluster
08:17 [Enrico] joined #gluster
08:19 ajames-41678 joined #gluster
08:20 edualbus joined #gluster
08:22 nadley joined #gluster
08:22 nadley hi all
08:24 nadley I'm trying to mount my datastore through a volume config file as explained here http://sysadminnotebook.blogspot.fr/2014/05/set-up-glusterfs-with-volume-replicated.html but it doesn't work... I have the same error as this post http://www.spinics.net/lists/gluster-users/msg19232.html but in the thread there is no real solution
08:25 nadley even if I can put multiple servers on backupvolfile-server mount option
08:27 ira joined #gluster
08:27 nbalacha joined #gluster
08:29 itisravi joined #gluster
08:34 topshare joined #gluster
08:36 arcolife joined #gluster
08:41 overclk joined #gluster
08:45 shubhendu joined #gluster
08:55 abyss^ nadley: why you don't use official manual for glusterfs which is great? Check the logs for clients and server. Try to set up volumes with official guide and if it works then try to play with it:)
08:55 ndarshan joined #gluster
08:57 jiffin1 joined #gluster
08:59 arcolife joined #gluster
09:01 meghanam joined #gluster
09:02 edualbus joined #gluster
09:08 sakshi joined #gluster
09:08 meghanam joined #gluster
09:18 ira joined #gluster
09:22 rjoseph joined #gluster
09:24 hagarth joined #gluster
09:28 nadley abyss^: I did it without the volume file
09:28 overclk joined #gluster
09:28 atinm joined #gluster
09:31 overclk joined #gluster
09:33 kdhananjay joined #gluster
09:37 overclk joined #gluster
09:38 hgowtham joined #gluster
09:40 nbalacha joined #gluster
09:41 Philambdo joined #gluster
09:45 SOLDIERz joined #gluster
09:45 eMBee joined #gluster
09:47 Trefex joined #gluster
09:48 eMBee i read in an article that glusterfs requires the number of servers to be a multiple of the replication factor. is that still the case?
09:51 msvbhat eMBee: The total number of bricks (backend disks) should be a multiple of the replication factor
09:51 msvbhat eMBee: It can be hosted in any number of servers though, I mean two servers can host 4 bricks
09:52 eMBee oh, i see
09:53 soumya joined #gluster
09:53 dusmant joined #gluster
09:54 eMBee so bricks match to disks. but can i ensure that data is replicated over servers to protect against server outages (as opposed to disks dying)?
09:55 msvbhat eMBee: Yes, you should. In fact gluster volume create throws errors if you try to create replica bricks on the same server
09:55 msvbhat But it doesn't stop you from creating if you really want to do so
09:56 msvbhat replication happens in the same order you specify the bricks(disks) in the volume create command
09:57 msvbhat for example if you run "gluster volume create testvol replica 2  server0:brick0 server1:brick1 server0:brick2 server1:brick3"
09:57 msvbhat Then server0:brick0 and server1:brick1 will be replica copies
09:57 msvbhat server0:brick2 and server1:brick3 will be other replica pair
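A minimal sketch of the brick ordering msvbhat describes above; the hostnames and brick paths are placeholders:
    # replica pairs follow the order the bricks are listed, so alternate the servers:
    gluster volume create testvol replica 2 \
        server0:/bricks/b0 server1:/bricks/b1 \
        server0:/bricks/b2 server1:/bricks/b3
    # -> (server0:/bricks/b0, server1:/bricks/b1) is one replica pair,
    #    (server0:/bricks/b2, server1:/bricks/b3) is the other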
09:58 eMBee ah, ok, i get it, so with two servers i can only have two copies of each brick, add a 3rd machine i can choose where the two copies reside
09:58 eMBee and the replication is always a full brick? like a mirror raid?
10:00 msvbhat Yeah. replication is full brick
10:01 eMBee is there any point in combining gluster with raid?
10:01 msvbhat Bote that brick is just a directory, I mean you can have more than one brick in a single disk (not recommended)
10:01 msvbhat *Note that
10:02 msvbhat eMBee: Well, I think that depends on your use case. I am not an expert it
10:02 msvbhat But you *can* if you want to
10:03 * msvbhat goes to get some Tea
10:03 eMBee well, it's simply a question if more redundancy is better, or if there is a point where it doesn't add anything?
10:04 ndarshan joined #gluster
10:05 shubhendu joined #gluster
10:06 eMBee another thing i could not yet find in the docs: gluster runs on top of standard linux filesystems. but does it store the data in normal files? (with the original filenames?) can i access the files on the disk from eg a rescue mode which would not run gluster?
10:07 PatNarcisoZzZ joined #gluster
10:12 gildub joined #gluster
10:15 overclk joined #gluster
10:19 kshlm joined #gluster
10:22 anoopcs eMBee, Yes. files are stored on bricks as they are on an xfs or ext4 file system.
10:23 anoopcs eMBee, But those files will have additional extended attributes set by glusterfs.
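A quick way to see those extended attributes is getfattr, run against a file on the brick itself rather than the client mount; the file path here is a placeholder:
    getfattr -d -m . -e hex /bricks/brick1/gv0/some/file
    # typically shows trusted.gfid plus trusted.afr.* / trusted.glusterfs.* attributes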
10:25 atinm joined #gluster
10:25 autoditac joined #gluster
10:31 nbalacha joined #gluster
10:40 dusmant joined #gluster
10:45 kdhananjay joined #gluster
10:48 alexandregomes joined #gluster
10:57 eMBee anoopcs: thanks, the main concern is additional backup options in case things go bad...
10:57 aravindavk joined #gluster
11:06 Romeor joined #gluster
11:17 soumya joined #gluster
11:18 pppp joined #gluster
11:18 shubhendu joined #gluster
11:23 glusterbot News from newglusterbugs: [Bug 1243384] EC volume: Replace bricks is not healing version of root directory <https://bugzilla.redhat.com/show_bug.cgi?id=1243384>
11:25 overclk joined #gluster
11:26 dusmant joined #gluster
11:27 Pupeno joined #gluster
11:29 rafi1 joined #gluster
11:31 suliba_ joined #gluster
11:31 mrErikss1n joined #gluster
11:31 doctorra1 joined #gluster
11:31 saltsa_ joined #gluster
11:31 edong23_ joined #gluster
11:32 kanagaraj_ joined #gluster
11:32 shaunm__ joined #gluster
11:33 kdhananjay joined #gluster
11:37 rafi1 joined #gluster
11:37 rafi joined #gluster
11:37 kanagaraj_ joined #gluster
11:37 kdhananjay joined #gluster
11:37 rafi joined #gluster
11:49 bfoster joined #gluster
11:53 glusterbot News from newglusterbugs: [Bug 1243408] syncop:Include iatt to 'syncop_link' args <https://bugzilla.redhat.com/show_bug.cgi?id=1243408>
11:56 pdrakeweb joined #gluster
11:59 raghu joined #gluster
11:59 topshare joined #gluster
12:00 kshlm Gluster community meeting is starting in #gluster-meeting now.
12:03 natarej__ dgbaley, forgot to say thanks for sharing your data with us
12:04 jdarcy joined #gluster
12:07 shubhendu joined #gluster
12:09 B21956 joined #gluster
12:09 soumya joined #gluster
12:09 jtux joined #gluster
12:10 chirino joined #gluster
12:12 topshare joined #gluster
12:13 topshare joined #gluster
12:13 jrm16020 joined #gluster
12:18 ashiq joined #gluster
12:18 surabhi joined #gluster
12:24 jotun joined #gluster
12:25 unclemarc joined #gluster
12:26 ctria joined #gluster
12:27 dusmant joined #gluster
12:27 Romeor no, it was not the kernel issue with my d8 problems. having same issue today also. was just lucky enough yesterday :*(
12:30 pdrakeweb joined #gluster
12:30 lpabon joined #gluster
12:31 sadbox joined #gluster
12:34 jotun joined #gluster
12:35 ashiq joined #gluster
12:40 ajames-41678 joined #gluster
12:41 scuttle|afk joined #gluster
12:42 elico joined #gluster
12:43 shubhendu joined #gluster
12:43 aravindavk joined #gluster
12:44 topshare joined #gluster
12:46 ppai joined #gluster
12:51 DV joined #gluster
12:53 Twistedgrim joined #gluster
12:54 nsoffer joined #gluster
12:59 Romeor anyone from devs alive?
12:59 ashiq joined #gluster
12:59 Romeor ndevos JoeJulian  ?
13:00 julim joined #gluster
13:00 Manikandan joined #gluster
13:08 elico left #gluster
13:16 mpietersen joined #gluster
13:19 arcolife joined #gluster
13:21 ekuric joined #gluster
13:23 bene2 joined #gluster
13:27 Manikandan joined #gluster
13:29 georgeh-LT2 joined #gluster
13:29 jcastill1 joined #gluster
13:34 jcastillo joined #gluster
13:36 shyam joined #gluster
13:40 DV joined #gluster
13:44 dgandhi joined #gluster
13:46 ashiq joined #gluster
13:49 kanagaraj_ joined #gluster
14:03 shubhendu joined #gluster
14:04 bene2 joined #gluster
14:04 dusmant joined #gluster
14:06 hagarth joined #gluster
14:11 nbalacha joined #gluster
14:11 liewegas joined #gluster
14:16 bennyturns joined #gluster
14:17 shyam joined #gluster
14:19 al joined #gluster
14:20 theron joined #gluster
14:21 theron joined #gluster
14:21 vimal joined #gluster
14:24 topshare joined #gluster
14:30 jbautista- joined #gluster
14:30 topshare joined #gluster
14:31 smohan joined #gluster
14:31 topshare joined #gluster
14:32 rwheeler joined #gluster
14:33 dusmant joined #gluster
14:41 plarsen joined #gluster
14:48 shyam joined #gluster
14:51 jobewan joined #gluster
14:54 shaunm_ joined #gluster
14:56 overclk joined #gluster
14:57 elico joined #gluster
14:59 topshare joined #gluster
15:01 hchiramm_home joined #gluster
15:12 yoavz joined #gluster
15:16 cyberswat joined #gluster
15:20 topshare joined #gluster
15:21 mckaymatt joined #gluster
15:22 kdhananjay joined #gluster
15:32 mpietersen does anyone know if the NFS daemon listed when running 'gluster volume status' is necessary?
15:32 mpietersen i can't mount the volume locally without nfs running, but the nfs daemon is instructed to start when the glusterd process starts
15:32 kkeithley only if you want NFS
15:32 mpietersen well, i'd like to export the volume as nfs for the clients that will be connecting to the volume
15:33 mpietersen i've just had issues before when I install NFS and start it, it fails because the process is already running (glusterd starts this when it starts)
15:34 mpietersen i've read through some of the mail logs, and I can't have multiple instances of NFS running at once
15:34 kkeithley There should be a glusterfsd process, that's the server for the "native" protocol.  you should be able to mount that with `mount -t glusterfs $hostname:$volname $mntpoint`
15:35 mpietersen i've tried that, but because I haven't installed NFS on this server for specifically that reason it fails because rpcbind isn't running
15:36 mpietersen when i start rpcbind manually, it still fails to mount
15:36 kkeithley that's correct. You can't run kernel NFS and gnfs at the same time.
15:36 mpietersen rpcbind: Cannot open '/var/lib/rpcbind/portmap.xdr' file for reading, errno 2 (No such file or directory)
15:36 kkeithley you don't need rpcbind to mount "native" gluster protocol.
15:36 mpietersen hrm
15:37 mpietersen not sure why it's not mounting then
15:37 mpietersen the logs complain about rpc
15:37 kkeithley I'm probably wrong about rpcbind then.
15:37 kkeithley although that might just be because it's running gNFS.
15:39 kkeithley Linux I presume. Which distribution?
15:39 mpietersen cent7.2
15:39 mckaymatt joined #gluster
15:42 mpietersen i guess i really don't need gluster to export the volume if I can export it with NFS from the kernel, i just wasn't sure if it would track file level changes for my geo-replication
15:43 kkeithley you can't export gluster volumes with kernel NFS
15:44 mpietersen ehhhh
15:44 mpietersen - /data/brick1/gv0 on /glusterfs type nfs (rw,addr=x.x.x.x)
15:44 mpietersen i just did
15:45 dgbaley mpietersen: You're all sorts of screwed up =)
15:45 mpietersen i have to add root_squash to that I guess
15:45 dgbaley You shouldn't have any NFS-related packages installed on your gluster servers
15:45 mpietersen so if I remove NFS, would that fix the problem?
15:45 mpietersen i figured i was screwed up, but getting help here has been hit and/or miss
15:45 mpietersen mostly miss
15:46 dgbaley That nfs daemon that's started and shows in "volume status <name>" means your server is already providing an NFSv3 server
15:46 mpietersen right
15:46 mpietersen and i know to enable nfs.export on the volume
15:46 dgbaley So your client, on another system, does an NFS mount like normal, and you point it to any of your gluster servers
15:46 mpietersen i've had it working before, but at this point i've had to rebuild so many times I've lost track of 'working vs. non-working'
15:46 dgbaley nfs.export isn't necessary either
15:47 dgbaley That's for further access control, you should make sure your setup is working first before you start limiting clients through that setting.
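For reference, a minimal sketch of mounting gluster's built-in NFS server from a client; gNFS speaks NFSv3, and the server and volume names are placeholders:
    mount -t nfs -o vers=3 any-gluster-server:/gv0 /mnt/gv0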
15:48 mpietersen ok
15:55 mpietersen so when i go to remove nfs-utils, it claims glusterfs-geo-replication and glusterfs-server are dependencies
15:55 mpietersen any other ideas dgbaley ?
15:58 dgbaley Ah, I'll soften what I said, leave nfs-utils on your system, but make sure there aren't any services set to start -- is this (rh)el or ubuntu?
15:58 mpietersen cent7.2
15:58 hagarth joined #gluster
15:59 mpietersen if nfs isn't started, it still fails to mount complaining about rpcbind
15:59 mpietersen using fuse or nfs
16:02 dgbaley What's your mount command?
16:03 mpietersen mount -t glusterfs master:/data/brick1/gv0 /glusterfs
16:03 dgbaley If it helps, this is my systemd configuration. Looks like somewhere along the line, rpcbind.socket got activated: http://fpaste.org/244688/
16:03 dgbaley That's vanilla centos7.2 the only things I enabled were glusterd and nfs
16:04 mpietersen i used a minimal install iso
16:04 dgbaley Same here
16:04 mpietersen i actually just ran the status, and in the log it looks like glusterd started rpc itself
16:04 mpietersen but my connection just shit-canned so I've got to reconnect
16:05 dgbaley If your brick name is "gv0" then the correct mount command would be "mount -t glusterfs <any-host>:gv0 /glusterfs
16:05 calavera joined #gluster
16:06 dgbaley It's wrong to specify any particular  brick
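A sketch of the native mount dgbaley describes, using the volume name rather than a brick path (hostname and mount point taken from the conversation):
    mount -t glusterfs master:gv0 /glusterfs
    # or persistently via /etc/fstab:
    # master:gv0  /glusterfs  glusterfs  defaults,_netdev  0 0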
16:06 mpieters_ joined #gluster
16:08 mpieters_ joined #gluster
16:09 mpietersen joined #gluster
16:09 mpietersen so, with nfs stopped
16:10 mpietersen joined #gluster
16:11 mpietersen joined #gluster
16:12 cholcombe joined #gluster
16:13 calisto joined #gluster
16:13 mpietersen joined #gluster
16:14 calisto joined #gluster
16:14 mpietersen joined #gluster
16:16 mpietersen joined #gluster
16:17 soumya joined #gluster
16:18 calisto joined #gluster
16:21 coredumb joined #gluster
16:21 msp3k joined #gluster
16:24 mpietersen joined #gluster
16:24 mckaymatt joined #gluster
16:25 mpietersen hopefully our wifi is done with its temper tantrum
16:26 mpietersen I think i may have gotten it to work by stopping nfs with glusterd stopped as well
16:27 mpietersen at least mounting via nfs
16:31 calisto joined #gluster
16:32 mpietersen ok, that worked in regards to mounting it, but gluster volume geo-replication status shows the volume crawl status as n/a
16:32 mpietersen i would assume that this isn't going to let it see file changes and push the files to the slave volume
16:32 dgbaley I can't help you with that, I've not even read about geo-replication let alone tried it
16:33 mpietersen heh, well thanks for the help you did provide
16:33 mpietersen that is the only reason we want to use glusterfs
16:34 mpietersen we have a slow vpn link and have to copy a bunch of files over
16:34 mpietersen drdb file locked and took down our nas, so something less IO intensive is needed
16:35 mpietersen aka, async
16:36 meghanam joined #gluster
16:38 calisto joined #gluster
16:40 mckaymatt joined #gluster
16:45 smohan joined #gluster
16:50 PatNarcisoZzZ joined #gluster
16:54 craigcabrey joined #gluster
16:55 uebera|| joined #gluster
16:57 firemanxbr joined #gluster
16:57 kkeithley mpietersen: you must write from the clients, i.e. the nfs mount or gluster native mount, if you want gluster to geo-replicate the files to the remote replica. If you're writing directly to the brick (/data/brick1/gv0) then you're doing it wrong.
16:58 firemanxbr hey guys, I have a problem using glusterd on my CentOS 7.1
16:58 mpietersen i know not to write directly to the brick itself, i mucked that up last week
16:58 calisto joined #gluster
16:58 firemanxbr when I reboot my server, the 'glusterd' service doesn't start
16:58 mpietersen i did get gNFS to start properly on my master node, however gNFS fails on my slave node
16:58 mpietersen and when i bring the master/slave volume online it immediately goes to faulty
16:58 firemanxbr but if I run 'systemctl start glusterd' after the reboot, it runs normally...
16:59 firemanxbr any idea about this problem?
16:59 mpietersen it could be that your service start order is conflicting with glusterd
16:59 dgbaley firemanxbr: I have the same problem, but I haven't had a chance to look at it too hard yet. I think it's a race condition with networking
16:59 firemanxbr mpietersen, I agree, but how to fix this problem?
17:00 kkeithley did you enable glusterd with `systemctl enable glusterd` ?
17:00 firemanxbr kkeithley, yep: systemctl is-enabled glusterd (enabled return)
17:01 firemanxbr kkeithley, I'm using fiber connections in my network devices
17:01 uebera|| joined #gluster
17:01 firemanxbr kkeithley, I believe this delay bringing up networking generates my problem :(
17:02 dgbaley firemanxbr: I was going to try and put all of the nodes in /etc/hosts and see if that fixes it, but I can't reboot right now
17:03 firemanxbr dgbaley, I don't have that problem, I use direct connections to IPs
17:05 overclk joined #gluster
17:07 dgbaley firemanxbr: ah, okay. does your log look like this: http://fpaste.org/244718/
17:07 firemanxbr dgbaley, what versions you use ?
17:08 dgbaley 3.7.2 on Centos 7
17:08 firemanxbr dgbaley, I'm using 3.7.2 and CentOS 7.1
17:08 dgbaley That's the systemd-journal btw, not the log from /var/log/gluster
17:08 firemanxbr dgbaley, glusterfs-server-3.7.2-3.el7.x86_64
17:11 msp3k left #gluster
17:13 uebera|| joined #gluster
17:15 mckaymatt joined #gluster
17:15 dgbaley firemanxbr: I'm rebooting now, maybe looking at the bootchart will help. Even using IPs instead of hostnames doesn't preclude the daemon from starting before networking is ready
17:16 dgbaley firemanxbr: also are you sure you want to use IPs for your brick names? There's a lot of benefit to using hostnames, it gives you flexibility with how your clients connect
17:17 firemanxbr dgbaley, humm
17:17 pppp joined #gluster
17:21 firemanxbr dgbaley, for me that's no problem, my problem is that glusterd doesn't start after the reboot process
17:22 dgbaley Sure, same here, I asked if you have this line in your logs: [glusterd-server-quorum.c:356:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume openstack. Stopping local bricks
17:22 dgbaley To me, that means it's starting before the network is ready
17:22 jdossey joined #gluster
17:23 dgbaley The bootchart shows glusterd starting just after network.target is ready, but there could still be a race condition. I've been meaning to strip NetworkManager off the system anyway, and am wondering if it's related
17:26 firemanxbr dgbaley, I dont idea about this
17:26 firemanxbr dgbaley, I trying solve my fiber with sfp+ delay here
17:27 dgbaley You think glusterd isn't starting because you're using SFP+ ports? I'm using QSFP and I really don't think that's the problem
17:29 firemanxbr dblack, for me it drops 2 or 3 pings, generating a delay, then it comes back
17:29 * firemanxbr sorry, this msg for <dgbaley>
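One hedged workaround for the startup race dgbaley describes above, assuming stock systemd and NetworkManager on CentOS 7, is a drop-in that makes glusterd wait for network-online.target (the drop-in file name is arbitrary):
    mkdir -p /etc/systemd/system/glusterd.service.d
    cat > /etc/systemd/system/glusterd.service.d/wait-online.conf <<'EOF'
    [Unit]
    Wants=network-online.target
    After=network-online.target
    EOF
    # network-online.target only actually waits if a wait service is enabled:
    systemctl enable NetworkManager-wait-online.service
    systemctl daemon-reload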
17:31 jiffin joined #gluster
17:33 gem joined #gluster
17:34 gem joined #gluster
17:37 gem joined #gluster
17:39 gem joined #gluster
17:41 gem joined #gluster
17:47 cyberswat joined #gluster
17:49 uebera|| joined #gluster
17:57 uebera|| Hi there. I just tried to upgrade to v3.6.4 on Ubuntu 14.04, but neither of the two server nodes would come up. I tried to recompile the packages myself, but I always see the following in the logs:
17:57 uebera|| [2015-07-15 17:53:39.558165] D [rpc-transport.c:300:rpc_transport_load] 0-rpc-transport: dlsym (gf_rpc_transport_reconfigure) on /usr/lib/x86_64-linux-gnu/glusterfs/3.6.4/rpc-transport/rdma.so: undefined symbol: reconfigure
17:57 uebera|| [2015-07-15 17:53:39.558377] W [rdma.c:4440:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
17:58 jiffin uebera||: are u using rdma volumes??
17:58 uebera|| Before I downgrade--apart from preserving the glusterd -LDEBUG logs, is there something else I can try?
17:58 glusterbot uebera||: downgrade's karma is now -1
17:59 jiffin uebera||: it seems to  missing of rdma packages
17:59 jiffin uebera||: if u are not using rdma , then it is not a problem
17:59 gem joined #gluster
18:00 uebera|| Another line (this looks incomplete) shows:
18:00 uebera|| [2015-07-15 17:53:42.919069] E [socket.c:3013:socket_connect] 0-management: connection attempt on  failed, (Connection refused)
18:01 uebera|| (note the two whitespaces between "on" and "failed"--something seems to be missing there)
18:01 glusterbot uebera||: failed 's karma is now -1
18:05 jiffin I think it should print information about the peer (probably the ip)
18:06 jiffin uebera||: can you check gluster peer status
18:07 uebera|| No, I can't do anything: gluster peer status would show "Connection failed. Please check if gluster daemon is operational."
18:10 jiffin uebera||: your glusterd is not operational, start the glusterd service
18:11 uebera|| The thing is, it dies (too fast to get anything out of it).
18:12 uebera|| I've got the -LDEBUG output, and that's it.
18:12 jiffin uebera||: hmmm
18:13 jiffin uebera||: i am not familiar with how to use gluster in ubuntu
18:14 hagarth uebera||: what other logs do you observe?
18:14 hagarth in -LDEBUG
18:15 uebera|| Let me replace the IPs, then I can upload the logs.
18:24 uebera|| Here you go --> http://pastebin.com/LCtZp4dj
18:24 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:25 hagarth uebera||: do you have ssl enabled?
18:25 uebera|| yes
18:25 Philambdo joined #gluster
18:25 hagarth that seems to be causing a crash.. there was one recent fix in that area
18:26 uebera|| Alternative URL --> http://paste.ubuntu.com/11884049/
18:26 hagarth uebera||: this patch fixes the issue in master - http://review.gluster.org/11650
18:26 uebera|| Well, 3.6.4 is rather new. And 3.6.3 worked.
18:27 uebera|| Thanks for the link. I can try to include that.
18:28 uebera|| But shouldn't 3.6.4 be replaced ASAP in that case? (I fear I'm once again the first to encounter this, but maybe not the only one :p)
18:28 hagarth uebera||: yes, I will drop a note to release-3.6 maintainer and see if we can do 3.6.5 soon. Not sure if it is a regression or a race, in any case thanks for the report!
18:35 uebera|| I'm rebuilding the .deb packages with the cherry-picked patch now...
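Roughly what that rebuild might look like on Ubuntu 14.04, assuming the fix from review.gluster.org has already been saved locally as 11650.patch (the patch file name is a placeholder):
    apt-get source glusterfs          # fetch the packaged 3.6.4 sources
    cd glusterfs-3.6.4
    patch -p1 < ../11650.patch        # apply the cherry-picked fix
    dpkg-buildpackage -b -us -uc      # rebuild unsigned binary packages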
18:37 cuqa_ joined #gluster
18:46 ueberall joined #gluster
18:49 uebera|| I can confirm that the cherry-picked patch fixes my problems. Rebooting the servers worked flawlessly. :)
18:53 Romeor https://bugzilla.redhat.com/show_bug.cgi?id=1242913 still need help
18:53 glusterbot Bug 1242913: high, unspecified, ---, rhs-bugs, NEW , Debian Jessie as KVM guest on GlusterFS backend
18:55 _maserati_ Romeor: still having that same prob?
18:55 Romeor yes
18:56 _maserati_ im really interested in knowing whats going on when you figure it out
18:58 Romeor me also. but i don't have enough debugging skills to find out it myself. do not have any support from debian, proxmox nor gluster :(
18:58 Romeor the most terrible is that it happens only with d8 :(
19:02 bennyturns joined #gluster
19:05 JoeJulian 1) D8 fails. Other distros and versions succeed. 2) GlusterFS is a posix file interface. 3) Proxmox is an html interface to D8's libvirt+kvm
19:06 JoeJulian So if qemu-kvm uses a gluster hosted file as a block device successfully, there's nothing different regardless of the guest.
19:06 uebera|| Romeo: From the screen shot, could there be a problem with your Perl installation? Is there a possibility to bypass this locally (do you know about perlbrew)?
19:06 uebera|| Sorry, Romeor: (see above)
19:06 jdarcy joined #gluster
19:08 Romeor JoeJulian: i understand this logic and it seems to be true... but... d8 works on ovirt :(
19:09 Romeor uebera||: there is only one problem with installation of d8. files are getting corrupted.
19:09 JoeJulian ovirt does support libgfapi and will use it by default. Not sure if proxmox does.
19:09 Romeor JoeJulian: proxmox does
19:09 Romeor by default
19:10 JoeJulian So we have the same gluster support either way. One works and the other doesn't. So that rules out gluster. So do they create the same libvirt VMs? That's where the dumpxml would help.
19:10 Romeor actually i'm already sure, that the problem is not on glusterfs :*( and its trully pitty.. cuz u guyz at least try to help
19:11 uebera|| Romeor: You're installing d8 from scratch? If d7 works, did you consider updating the instance instead?
19:11 JoeJulian Interesting idea.
19:11 JoeJulian I suspect a kernel bug in the installer image.
19:11 JoeJulian A d7 upgrade would get around that.
19:12 Romeor yes, i install from netinstall and dvd isos. if i install from the dvd iso, installation goes OK, but at the end i've got an unusable system, it just won't boot, as on the screenshot.
19:12 uebera|| Some days ago, I saw this --> https://www.skelleton.net/2015/05/04/upgrading-debian-guests-on-proxmox-to-jessie/
19:14 JoeJulian I still think it's the cache method of the VM.
19:15 Romeor JoeJulian: me, as advanced user (very advanced, but not a developer) was thinking the same.. but it seems noone cares.. do you have some way to contact d8 dev team?
19:15 Romeor Proxmox uses none as cache by default
19:15 Romeor uebera||: that article is about openvz
19:16 Romeor openvz is dead and i try to move all containers to kv
19:16 Romeor m
19:17 calavera joined #gluster
19:17 Romeor the most terrible thing.. i was a real fan of d8 all the way around (starting of woody)... now it seems like peace of... crap.
19:17 uebera|| I see. Personally, I use lxc for all Linux-based VMs w/o any problems.
19:18 uebera|| s /VMs/containers/
19:19 Romeor i'll give it a try for cache write through for d8. its was the default cache for qemu afaik
19:21 cyberswat joined #gluster
19:27 Rapture joined #gluster
19:34 jmarley joined #gluster
19:35 theron joined #gluster
19:45 Romeor ok. am now going to install two new VMs: 1 with write-through caching for the d8 guest and one without cache (like i always did) for c7. both VMs got new numbers, so their directories are not created yet on glusterfs (i suspect that i could install other VMs successfully cuz i used the same (old) numbers and there was no self-heal process behind). we'll see what will happen
19:47 Romeor maybe there is another major bug between glusterfs and qemu that i discovered many versions ago with gluster 3.4, and some1 from the devs on the mailing list advised me to turn off the self heal.
19:47 Romeor or it was other type of heal...
19:48 Romeor if i remember right, it was self heal :S the one that one can turn off per volume
19:48 md2k joined #gluster
19:48 md2k Hi All
19:50 md2k have a question related to the glusterfs self-heal procedure, but didn't find anything in the docs. is there any way to disable self-heal from client-side connections? i have performance issues during those operations
19:51 md2k gluster version : 3.6.3-ubuntu1~trusty10
19:52 mpietersen joined #gluster
19:53 Romeor its like gluster volume set help
19:53 Romeor Option: cluster.self-heal-daemon Description: This option applies to only self-heal-daemon. Index directory crawl and automatic healing of files will not be performed if this option is turned off.
19:54 Romeor md2k: it was answer for u
19:55 calavera joined #gluster
19:59 rwheeler joined #gluster
20:02 md2k Romeor, yeah but it looks like that disables entry self-heal, which isn't what i want to do. i want to prevent the client from triggering self-heal operations
20:04 Romeor but.. if you turn it off per volume than it seems like it is what you trying to get... there was another healing solution that will keep working afaik. devs could explain better or i could chek my mailings
20:06 Romeor oh, my mail with devs advises is deleted :(
20:06 md2k happens :)
20:08 md2k just faced a problem: when a client connected to the cluster triggers self-heal on a folder, all my php/nginx processes fall into uninterruptible sleep until the heal operation for this folder is finished (the folder includes ~150k files, images)
20:10 Romeor then turn off self heal for this volume
20:10 Romeor it doesn't mean that you lose HA possibilities
20:10 Romeor afaik :)
20:10 md2k hm
20:11 DV joined #gluster
20:12 JoeJulian md2k: As long as you understand the ramifications of disabling client-side self-heal, it's certainly a valid option.
20:13 Romeor HA! found it!
20:13 Romeor Mounts will do the healing, not the self-heal-daemon (c) Pranith Kumar Karampuri <pkarampu@redhat.com>
20:14 Romeor so just turn it off. /me is thinking to do the same for my VMs...
20:15 md2k Romeor, in my case it actually seems the mounts with their healing are the problem
20:15 Romeor then ask JoeJulian  :)
20:15 Romeor hes a brain here. really.
20:16 Romeor also ndevos
20:16 md2k example : start time of operation [2015-07-15 18:36:56.127551] I [afr-self-heal-entry.c:554:afr_selfheal_entry_do] 2-gfs_uploads-replicate-0: performing entry selfheal on 4935eae9-84a5-4f9b-9b50-866e7bc82b8d
20:16 md2k and end of operation : [2015-07-15 18:46:34.709644] I [afr-self-heal-common.c:476:afr_log_selfheal] 2-gfs_uploads-replicate-0: Completed entry selfheal on 4935eae9-84a5-4f9b-9b50-866e7bc82b8d. source=1 sinks=0
20:17 Romeor if i understand that nickname right, it one of devs also. niels de vos
20:17 Romeor to me it seems like self heal
20:17 md2k ~10 min for heal operation this gfid points to folder with 150k files.. during this operation all my nginx/php dead , and this is log file from client side where volume mounted
20:17 Romeor just turn it off and see.
20:17 redbeard joined #gluster
20:19 md2k JoeJulian: have any idea about my problem?
20:19 Romeor don't pay much attention to me tonight .. i'm drinking Argentinian wine and may say some stupid things :D
20:19 md2k :)
20:20 Romeor but really, i'd like to have same community places for other projects... where you can speak with devs eyes-to-eyes :D wish i had same place for debian
20:21 Romeor some time ago proxmox had same community.. then the moved to commercial basis and now just ignore community forum if one doesn't have subscription ..lol
20:21 Romeor but that other story. i'll keep my testings better
20:22 md2k testings and wine.... hope all will go well enough :)
20:22 _maserati_ opensource isn't the easiest to make a living off i guess. I wish companies were better at contributing financially to those projects we rely on. I've been talking to my bosses to try and cut a check for the gluster team
20:23 bennyturns joined #gluster
20:23 _maserati_ md2k: don't say it as such a bad thing! give me a few beers and i'll hammer out projects left and right :P
20:26 Romeor _maserati_: take a look at VLC! !! it is just the best example of OpenSource projects that lives like God and devs earn enough money.. even more than enough :) and they just got a small donation banner on their webpage.. but how the hell many commercial projects were forked by same devs for companies...
20:27 Romeor and they started like typical OS project :)
20:27 Romeor and gave lots of support on forums
20:27 Romeor now they still give lots of support on community forums, but earn money elso
20:27 Romeor also*
20:28 Romeor i'm talking about proxmox atm. i like Gluster support  :)
20:28 Romeor the only problem is time.. US and europe
20:29 Romeor another fine synergy between community and commercials is owncloud
20:30 Romeor so i can't share your opinion about hard life of open source projects :)
20:30 Romeor the only thing counts here is attitutde.
20:30 Romeor attitude*
20:32 dgandhi joined #gluster
20:32 ghenry joined #gluster
20:32 ghenry joined #gluster
20:33 Romeor if someone creates an OS project to earn money - it will fail. If someone creates an OS project to share some knowledge, skills and idea and keeps to be fan of this, it will success
20:35 Romeor but nowadays the most pop thing is to create an OS project, get large enough community and create a parallel fork of it to sell for the cost of community contribution. this makes me mad like hell (red hat is here.. yes :D)
20:37 Romeor JoeJulian: ok. here are my test results: centos7 installed fine without any cache enabled. d8 installed fine once with write-through, another time it failed on the mirror selection step, like always. this makes me think, like you, that it is a d8 bug itself, but where exactly - IDK !!! :*(
20:38 Romeor OR! there is a bug between gluster and qemu that corrupts data.
20:40 autoditac joined #gluster
20:41 md2k1 joined #gluster
20:41 md2k left #gluster
20:41 Romeor JoeJulian: https://romanr.eu/owncloud/index.php/s/ajsjemvVryLJfig here's an error.. again the perl step. something corrupted during installation.
20:41 JoeJulian if the latter was true, it would corrupt data for everything
20:42 Romeor agree
20:43 Romeor so i have to go and try to knock on debian's door i guess?
20:43 JoeJulian Can you make an iso with a newer kernel?
20:44 JoeJulian Just to see if it's a kernel issue
20:44 Romeor its the latest 8.1
20:44 JoeJulian There is no 8.1
20:44 Romeor whoot?
20:44 Romeor https://www.debian.org/News/2015/20150606
20:45 JoeJulian The very latest kernel is 4.1.2
20:45 Romeor oh... lol :D i'm speaking of debian 8.1
20:47 Romeor i could try debian testing or the worse - unstable
20:47 JoeJulian I don't see how it could matter, but I know that I had a problem with a Fedora ISO, once, where it couldn't recognize my disk controller. I had to upgrade the kernel in the ISO in order to install.
20:47 JoeJulian Good idea. Worth a try to see if it's just something specific to that version.
20:48 Romeor hmm.. sounds like fine idea about changing the kernel, but i don't believe such step would work with debian.. even more - i never did such things before :(
20:48 JoeJulian I hadn't before with Fedora, either, nor have I since. Just that one release had a kernel with a bug.
20:50 _maserati_ you could just ditch debian and go with something a little better =P
20:50 Romeor debian was the best distro till 8
20:51 Romeor with centos you cant just that simply switch from 6 to 7 like in debian from 6 to 7 .. just an apt-get dist-upgrade
20:52 badone joined #gluster
20:52 _maserati_ cents upgrade process isn't very hard at all
20:52 Romeor ubuntu is another alternative.. specially now, when debian changed its init system
20:52 _maserati_ booooooooo ubuntu
20:53 Romeor _maserati_: yes, it is not hard.  just backup data and clean isntall of newer version :D
20:54 Romeor ok, goingt to test debian testing
20:57 ndevos Romeor: have you tried to install on a VM with a different disk-controller? instead of virtio, maybe try ide or scsi?
21:00 Romeor ndevos: nope, i didn't. but i had similar thoughts about corrupted virtio drivers in d8 :)
21:00 theron joined #gluster
21:01 Romeor will try as my next step.
21:01 ndevos Romeor: yeah, that would be interesting to know, maybe the version of qemu just does not like the debian 8 drivers, or the other way around
21:02 Romeor deem, how i didn't come this idea myself
21:02 Romeor thanx a lot ndevos
21:03 Romeor do i understand right, that you are the guy with name Niels de Vos? :))
21:03 ndevos hah, yes :)
21:03 Romeor nice to meet you. :)
21:03 ndevos hey, same to you!
21:03 Romeor get an OP status for yourself here :)
21:03 JoeJulian I just encountered that with ubuntu and virtio network drivers.
21:04 Romeor JoeJulian: tell me more
21:05 JoeJulian Was a known bug with the 14.04 ubuntu kernel we were using. Periodically it would reset itself. Since I wanted an arch boot anyway, I just rebuilt the init to use the latest arch kernel.
21:05 Romeor so cool to meet people live (irc i mean) with whom i had so many conversations on list :) the one is missing - Kumar
21:06 Romeor JoeJulian: hmmm (/me is trying to create a smart face after one bottle of red wine) ok. :D
21:06 JoeJulian hehe
21:06 ndevos which Kumar, I think that is a common middle/familiy name
21:06 CyrilPeponnet joined #gluster
21:07 Romeor ndevos: Pranith Kumar Karampuri
21:07 CyrilPeponnet joined #gluster
21:07 Romeor JoeJulian: I really should get familiar with changing kernels on iso files and inits... seems like pretty useful thing
21:08 ndevos oh, thats pranithk, if he's online, its mostly earlier, lives in Bangalore
21:08 JoeJulian I think from here, I'd say mostly later. :)
21:09 ndevos hehe, yes, for you for sure
21:10 CyrilPeponnet joined #gluster
21:10 _maserati_ stop excess flooding then!
21:10 CyrilPeponnet joined #gluster
21:10 Romeor but well, at least I can say that to you guys - GlusterFS is a great product. keep it live :) beat the ceph
21:10 CyrilPeponnet joined #gluster
21:11 Romeor _maserati_: relaax. i didn't even start to .. yet
21:11 ndevos cool, thanks :)
21:11 _maserati_ was joke @ CyrilPeponnet
21:12 JoeJulian _maserati_: With legal weed down there you should be able to be chill about flooding... ;)
21:12 Philambdo joined #gluster
21:12 _maserati_ trust me, im chill ;)
21:12 Romeor i had pretty huge holy war here in my company to decide to glusterfs... against 2 other admins/technician guys ... and i win :P
21:12 JoeJulian Someone needs to make that graphic.
21:12 Romeor _maserati_: i understood that it was joke :)))
21:12 ndevos nice to hear, and well done Romeor :)
21:14 ndevos actually, Ceph is quite cool too, but I like the simplicity of Gluster more
21:14 ndevos and C++ isnt my thing, I prefer C
21:14 glusterbot ndevos: C's karma is now 2
21:14 _maserati_ I prefer C
21:14 _maserati_ fine, don't count my vote
21:14 Romeor glusterfs - is the thing well done :) those guys could not say anything against RedHat comparison  graphs of performance between ceph and gluster.. and yes.. simplicity of glusterf just beats a lot
21:14 ndevos no, _maserati_ you dont like C++ ? :D
21:14 glusterbot ndevos: C's karma is now 3
21:15 md2k ;)
21:15 _maserati_ thanks :D
21:15 Romeor C++ is language of.. guess what :D of well done marketing
21:15 glusterbot Romeor: C's karma is now 4
21:16 Romeor true C is only way to go. comrade linus torwalds said a lot about those two
21:16 bennyturns joined #gluster
21:17 _maserati_ I wish I were more gooder with C, but I'm not so clever, so i'm stuck in C++ land
21:17 glusterbot _maserati_: C's karma is now 5
21:18 Romeor i don't know any of them. really. but i've read about both a lot.. i just want to learn some language, but do not know what do i start from :P
21:19 _maserati_ I don't know if this is a common sentiment... but i absolutely adore handling my own memory addresses
21:19 Romeor and i don't have enough patience to learn and practice the syntax of any language
21:19 JoeJulian It must be getting close to Friday, because when _maserati_ said that it sounded dirty.
21:19 _maserati_ sounds like a java guy in the making to me :)
21:20 _maserati_ Wow, it does, doesn't it?
21:21 Romeor what do you guys do at Fridays ? i'm starting to meet sunday :D
21:21 Romeor just lots of tequila, beer and mexican food
21:22 md2k JoeJulian: correct me please, if i turn cluster.data-self-heal/entry-self-heal/metadata-self-heal it should disable any heals from mount process ?
21:22 _maserati_ beer, beer, beer, bars, trucks, beer, video games, pass out
21:22 Romeor and.. i enjoy my muay-thai training practice on fridays a lot :D
21:23 _maserati_ Is... is that a mixed drink?
21:24 Romeor well.. almost.. at least it feels like almost same after an elbow to head or high kick .. specially when i do them to opponents
21:24 Romeor but sometimes its fine to get some of them too...  :)
21:25 md2k ^ if wi will turn them off. (to my prev message :) )
21:25 _maserati_ Well when i need to kick someone in the head I practice the ancient art form of "Tequila"
21:26 JoeJulian md2k: correct
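A minimal sketch of the options under discussion (VOLNAME is a placeholder); the first three stop heals triggered from the mounts, while the commented-out last one would additionally stop the self-heal daemon itself:
    gluster volume set VOLNAME cluster.data-self-heal off
    gluster volume set VOLNAME cluster.metadata-self-heal off
    gluster volume set VOLNAME cluster.entry-self-heal off
    # gluster volume set VOLNAME cluster.self-heal-daemon off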
21:27 Romeor _maserati_: so you just put on some1's head a helmet and then hit his head with fire extinguisher twice?
21:27 Romeor thats named tequila boom
21:27 md2k JoeJulian: thanks
21:27 JoeJulian I don't care if there's off topic chatter, just make sure questions get answered.
21:28 Romeor JoeJulian: oh.. yes.. the offtopic thing... is it ok to chat here?
21:28 JoeJulian I'll be afk for a bit.
21:29 JoeJulian As long as it's clean and friendly and doesn't make anybody afraid to ask their question. Also don't let the questions take a back seat to chatting.
21:30 Romeor every1 is welcomed to #gluster-chat to have fun  :D
21:32 Romeor haha.! debian testing installation was successful on 2 of 4 nodes, which recently failed debian stable
21:33 Romeor so now i can say officially, it seems like its not glusters fault
21:33 Romeor now i'll try to exclude the virtio thing
21:35 Romeor but first i'll start the mate installation to check it once more. it has lots of packages to install and the possibility of something getting corrupted is pretty high
21:36 Romeor but DAMN i like glusterfs performance. ... its like it was local sas disks... (we ran glusterfs on 10k rpm sas)
21:36 Romeor and using 10g backend also
21:39 _maserati_ I have gluster controlling a 3PAR SAN backend. is very fast. is very nice
21:40 _maserati_ also, i am sick and tired of having seattle's weather in colorado! Torrential down pours every. single. day. i want my sun back :(
21:40 Romeor _maserati_: hah. you are speaking about weather in estonia 90% of summer time
21:41 _maserati_ but its colorado! we're supposed to have happy fun sun time in the summer!
21:41 Romeor _maserati_: by the way.. just curious, as an american guy, how would you classify my english? :D
21:42 _maserati_ before the wine, almost undetectable. After the wine, I could tell =P
21:42 Romeor before my wine or yours? :D
21:42 _maserati_ Yours
21:42 calavera joined #gluster
21:43 nsoffer joined #gluster
21:43 Romeor deem, another reason to quit drinking
21:43 Romeor thanx anyway
21:43 _maserati_ Oh no, I would condone no such thing
21:46 _maserati_ What kind of life is that to not drink?
21:48 Romeor _maserati_: idk :( so i do
21:48 _maserati_ idk either, and i dont wanna know
21:49 Romeor deem, mate installation failed on both. but idk if it is testing related or same problem
21:49 _maserati_ different disk controller?
21:50 Romeor https://romanr.eu/owncloud/index.php/s/ejGkZFZBczhL3dC
21:50 Romeor going to try that too
21:51 calavera joined #gluster
21:53 Romeor according to this: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=779057 it comes when some device in fstab is missing. seems like related to corrupted data
21:53 _maserati_ screenshot seems to point towards it timing out trying to read data
21:53 _maserati_ i'd try the other disk controllers like Joe mentioned earlier
21:54 Romeor going to test it now ...
21:55 Romeor at least i've got another bottle
21:55 Romeor shiraz malbec mix just makes me crazy
21:55 _maserati_ mail me some
21:55 Romeor seems like your e-mail server won't accept that much data
21:56 _maserati_ press Send harder!
21:56 Romeor i like to press it press it ... (i like to move it move it motive )
22:00 Romeor interesting .. i did reboot and typed dpkg --reconfigure -a and this time it went ok, but... same failed step as always: it just can't normally install the python-gtk2 from repository. when i do apt-get download python-gtk2 and then dpkg -i it, it goes OK
22:00 dijuremo joined #gluster
22:01 _maserati_ I still think it may be io timing out
22:02 shyam joined #gluster
22:02 dijuremo Hi everyone, trying to find some help with a gluster setup. Have two servers as replicas hosting samba, and windows clients using roaming profiles. Logins are so slow because it seems gluster is taking forever to access small files. What can I do to improve performance?
22:03 Romeor ok. now won't waste more time. going to test one with scsi and another one with sata
22:03 _maserati_ Are your replicas local to eachother?
22:04 dijuremo Yes, and connected via 10Gbps adapters
22:05 dijuremo I have 3 gluster volumes in these systems. Samba is using the vfs gluster plugin
22:05 JoeJulian samba & windows is probably related to oplocks. They're unbelievably slow and I had to disable them when I set that up a number of years ago. It's supposed to be better with the vfs, but I haven't tested that.
22:06 dijuremo If I use the vfs modules, I should not have to manually mount files using direct-io=disable, right?
22:07 dijuremo I have found some people pointing at mounting the gluster volume with option direct-io=disable to increase performance
22:07 Romeor _maserati_: i'm thinking here.. what should i change.. the disk driver or network driver.. they are both virtio atm :)
22:07 _maserati_ disk
22:08 Romeor mkay.
22:08 JoeJulian dijuremo: correct. VFS doesn't use a fuse mount.
22:10 dijuremo joined #gluster
22:10 calisto joined #gluster
22:11 dijuremo @JoeJulian: I have oplocks turned off
22:11 dijuremo oplocks = no
22:11 dijuremo level2 oplocks = no
22:12 JoeJulian Well, that exhausts my knowledge about that.
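For context, a sketch of what a vfs_glusterfs share section typically looks like in smb.conf alongside the oplocks settings quoted above; the share name and volume name are placeholders, and 'kernel share modes = no' is the usual companion setting:
    [profiles]
        vfs objects = glusterfs
        glusterfs:volume = gv_home
        glusterfs:logfile = /var/log/samba/glusterfs-gv_home.%M.log
        path = /
        read only = no
        kernel share modes = no
        oplocks = no
        level2 oplocks = no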
22:12 dijuremo Even getting samba out of the question, doing ls -lR is superslow the first time
22:13 _maserati_ how many files are you talking?
22:14 dijuremo few thousand...
22:14 calisto joined #gluster
22:15 dijuremo So first time around:
22:15 dijuremo [root@ysmha01 dijuremo]# time ( find .winprofile.V2 -type f | wc )
22:15 dijuremo 336     781   32466
22:15 dijuremo real 0m3.434s
22:15 Romeor heheh.. sata drivers performs a bit faster then scsi according to installation progress
22:15 dijuremo Second time, it drops to:  real 0m0.371s
22:16 dijuremo And this is just for a meager 336 files in a user with a new roaming profile
22:16 Romeor oops sorry to interrupt u dijuremo
22:16 dijuremo @Romeor NP
22:17 dijuremo [root@ysmha02 dijuremo]# time ( find /home/miguel/.winprofile.V2 -type f | wc )
22:17 dijuremo 2140    2889  332485
22:18 dijuremo real 0m26.777s
22:18 dijuremo That is way too slow... 26 seconds for two thousand files...
22:19 dijuremo In the old standalone server without gluster, the same operation takes just 0.148s
22:20 dijuremo root@ysmbackups:~# time ( find /home/miguel/.winprofile.V2 -type f | wc )
22:20 dijuremo 2140    2889  332485
22:20 dijuremo real 0m0.148s
22:21 dijuremo I have only added two performance options but no improvement:
22:21 dijuremo performance.io-thread-count: 32
22:21 dijuremo performance.cache-size: 256MB
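Those two options would have been applied with something like the following (VOLNAME is a placeholder); 'gluster volume info' lists the reconfigured options afterwards:
    gluster volume set VOLNAME performance.io-thread-count 32
    gluster volume set VOLNAME performance.cache-size 256MB
    gluster volume info VOLNAME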
22:23 _maserati_ Wish I could help you, but I've never ran into this issue, and we have millions of files over samba
22:23 hchiramm_home joined #gluster
22:24 ctria joined #gluster
22:25 topshare joined #gluster
22:25 * Romeor would try to tune samba performance
22:25 dijuremo @_maserati_ do you mount your gluster partitions with direct-io=disabled?
22:26 _maserati_ Romeor, he said he's having the same latency issues when doing `ls` commands on the gluster clients themselves
22:27 _maserati_ dijuremo, no, direct-io is not disabled
22:27 Romeor oh, my bad.
22:27 dijuremo @_maserati_ any options set in the gluster volume to speed up reads of small files?
22:28 dijuremo I have 12 2TB SATA drives on an areca raid controller as the backend...
22:28 Romeor may be you should try to tune the underlying FS that u use for gluster volumes?
22:29 _maserati_ What FS are you on top of?
22:29 Romeor hah :D
22:29 _maserati_ And honestly, your problem could stem from that raid controller or even the slow disks
22:29 Romeor sata is not that slow for one way operations like reading.. so raid contra remains
22:30 _maserati_ You dont know how he has those disks setup =P
22:30 jdossey joined #gluster
22:30 dijuremo I set up the disks using all the "speed" recommendations
22:31 Romeor but the underlying FS counts also.. so what is it?
22:31 dijuremo Reading some RedHat docs...
22:31 dijuremo xfs
22:31 _maserati_ so it sounds like
22:31 Romeor xfs has a LOT of optimization options.. one can adapt it for any kind of files
22:32 _maserati_ your disks could be turning off after x amount of time of no activity. so when you first do an 'ls' your disks all have to spin back up, and if you've gone for "Speed" then the raid controller is probably striping many of them, therefore for reads you literally are waiting for 12 disks to spin back up
22:32 dijuremo Nope, disks do not spin down..
22:32 _maserati_ hmmm
22:32 Romeor _maserati_: has a point btw!! dijuremo you don't use WD green series, do you?
22:32 dijuremo Nope
22:33 Romeor then try to tune XFS for small files
22:33 dijuremo Slot 01 HITACHI HUS723020ALS640          2000.4GB  Raid Set # 000
22:33 dijuremo All 12 drives are HITACHI
22:33 _maserati_ im about to leave work. if you jump on tommorrow i'll dig through my XFS and gluster configs and show you what i've done. We process millions of small files... so it may help
22:33 Romeor oh.. hitachi .. i luve them
22:34 _maserati_ I use all hitachi's in my home storage system (24 TB) :)
22:34 Romeor dijuremo: while there is no _maserati_ here, i would advice to try with ext4 :)))
22:34 dijuremo No way, I ain't formatting this thing again...
22:34 dijuremo Have 5+TB live data
22:35 Romeor it just works out-of-the-box... really. i had lots of problems with gluster and xfs, while glusterfs recommends xfs
22:35 _maserati_ Yessir, it's beer-thirty. See you guys tommorrow, ima wrap things up here
22:35 dijuremo [root@ysmha01 sbin]# df -h | grep export
22:35 dijuremo 10.0.1.6:/export                                19T  5.7T   13T  31% /export
22:35 doctorra1 dijuremo: how about when you run the find command on one of the gluster replicas itself?
22:35 Romeor cya
22:35 Romeor dijuremo: don't you have some free space for another volume?
22:36 dijuremo So if I run the command on the brick mount point, it is very fast...
22:36 dijuremo If I run it on the gluster mount point it is very slow
22:37 Romeor dijuremo: have you read this one ? https://github.com/purpleidea/puppet-gluster/blob/master/manifests/brick.pp#L261 there are recommended mount options for xfs from glusterfs.
22:37 dijuremo [root@ysmha01 sbin]# time ( find /bricks/hdds/brick/home/ltabares/.winprofile.V2 -type f | wc)
22:37 dijuremo 3699    9719  549098
22:37 doctorray I'm troubleshooting similar issues on my clients, in aws.  lots and lots of small files.  ls can take forever.  but mostly it only happens when one of the replicas is under load doing a self heal
22:37 _maserati_ dijuremo, that definitely rules out the controller, disks, and filesystem
22:37 dijuremo real 0m0.017s
22:37 dijuremo vs the actual gluster mount point /export
22:38 dijuremo [root@ysmha01 sbin]# time ( find /export/home/ltabares/.winprofile.V2 -type f | wc)
22:38 dijuremo 3699    9719  508409
22:38 dijuremo real 0m1.253s
22:38 dijuremo Well now it is cached, so it is faster, but still two orders of magnitude slower on gluster vs brick
22:39 dijuremo Original non-cached took 23.738s on that folder
22:39 Romeor add lots of ram and rise the cache to somethingreallybig 8)
22:39 dijuremo How can I dump xfs options for you to see...
22:39 Romeor just joking
22:40 purpleidea dijuremo: ack :) any updates or changes lmk, but those came from the perf tuning recommendations ben england did. cc Romeor
22:40 doctorray dijuremo: are your fuse client versions up to date?
22:41 dijuremo [root@ysmha01 sbin]# rpm -qa | grep fuse
22:41 dijuremo glusterfs-fuse-3.6.3-1.el7.x86_64
22:41 dijuremo fuse-2.9.2-5.el7.x86_64
22:41 dijuremo fuse-libs-2.9.2-5.el7.x86_64
22:41 dijuremo I am on CentOS 7 and have not upgraded from 3.6.x to 3.7
22:41 Romeor 3.6.4 is out
22:41 JoeJulian fuse client doesn't matter with samba vfs.
22:42 dijuremo Which is also why I think direct-io is not going to help
22:42 dijuremo Cause I have samba using vfs
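For context, a share exported through the glusterfs VFS module (so Samba talks libgfapi directly instead of going through the FUSE mount) looks roughly like this in smb.conf; the share name, volume name and log path below are assumptions, not dijuremo's actual config:

    [export-share]                       ; hypothetical share name
        path = /                         ; path inside the gluster volume, not a local directory
        vfs objects = glusterfs
        glusterfs:volume = export        ; volume name inferred from the mount output above
        glusterfs:logfile = /var/log/samba/glusterfs-export.%M.log
        glusterfs:loglevel = 7
        kernel share modes = no          ; usually needed when libgfapi handles the I/O
        read only = no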
22:43 doctorray sorry, didn't see the vfs part.
22:44 Romeor just a bit about my problem.. base install for both scsi and sata drivers was fine. trying to install mate now
22:45 cleong joined #gluster
22:46 dijuremo From my setup notes:
22:46 dijuremo Raid6 stripe on 12 disks set to 128K
22:46 dijuremo [root@ysmha02 ~]# mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 /dev/sda1
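The su/sw figures in that command follow from the array geometry (a quick sanity check using the numbers quoted above):

    # RAID6 across 12 disks with a 128K controller stripe:
    #   data disks  = 12 - 2 parity             -> sw=10
    #   stripe unit = controller chunk = 128K   -> su=128k
    #   full stripe = 128K * 10 = 1280K of data per full-stripe write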
22:46 nishanth joined #gluster
22:51 Romeor dijuremo: i really feel sorry for your trouble, but welcome to the club. I also have a very specific problem, which now (after about 2 months of debugging) seems to be not gluster related.
22:51 dijuremo Arggg cannot win...
22:52 dijuremo I had originally set these servers with drbd/btrfs until btrfs crapped out on me
22:52 Romeor yes you can. it's the price of open source.. getting a bit of help from here, a bit from there.. meeting interesting people, getting new contacts ... after which you will make history :D
22:52 dijuremo So now I tried gluster, everything else seems fine, transferring files I get almost gigabit speeds (minus small overhead) via samba for both reads and writes...
22:53 dijuremo And to make things more complicated, this server is actually also doing ovirt with a self-hosted engine on top of gluster... :)
22:53 JoeJulian At least you didn't get stuck with drbd in split brain and having to spend weeks pulling stripes of different drives to try to find data that was lost.
22:53 dijuremo Fortunately never had split brain issues with drbd...
22:54 JoeJulian When it happens it's really bad.
22:54 * Romeor also said no to drbd after reading both gluster and drbd docs :)
22:54 JoeJulian Or at least it was when I used it last.
22:54 dijuremo If it had not been for stupid btrfs, I would still be happy running it...
22:54 Romeor JoeJulian: give me +v. just for fun
22:54 Romeor okese
22:54 Romeor please*
22:55 dijuremo I wanted to avoid the fragmentation of things into many file systems and wanted a single large filesystem...
22:55 JoeJulian @glusterbot voice Romeor
22:55 dijuremo Should have gone with ext4+drbd and 16TB...
22:55 Romeor he's sleeping :)
22:56 Romeor dijuremo: do not give up. for some reason i think that your trouble is not gluster related either.. _maserati_ said he's fine with it
22:56 JoeJulian Remember, it's the poor mechanic that blames his tools.
22:56 dijuremo I made sure to split things in this hardware...
22:56 Romeor weeehaa :D thnx. now i beat those nicks with _ in the beginning
22:57 dijuremo SSDs for gluster/ovirt are connected to their own ARECA controller.
22:57 dijuremo HDDs for fileserver are connected to a separate ARECA controller
22:58 dijuremo So I have two areca controllers per server: Controller Name    : ARC-1882
22:58 * Romeor doesn't know if it is right, but thinks that ssd for network storage is a bit overkill.. sas 10k is best but well. who knows.
22:59 dijuremo SSDs is for vm storage...
22:59 JoeJulian ssd as journal for xfs is nice.
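A minimal sketch of what that looks like, assuming a spare SSD partition (the device names below are placeholders, not anyone's real layout): the XFS metadata log goes on the SSD at mkfs time and is referenced again at mount time.

    # put the XFS log on an SSD partition instead of the data array (hypothetical devices)
    mkfs.xfs -l logdev=/dev/ssd-log-part,size=128m /dev/data-array-part
    mount -o logdev=/dev/ssd-log-part,noatime,inode64 /dev/data-array-part /bricks/hdds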
22:59 dijuremo I have the hosted engine and windows server vms running on top of the SSDs
23:00 Romeor I've got ssds for VMs also. still in their cases, no time to take them out and set them up as local storage for the VE hosts... really no difference between sas and ssd for a vm.
23:00 Romeor over the network i mean
23:00 Romeor but ssds die faster
23:00 jdossey joined #gluster
23:01 Romeor but it does not mean that ssd are bad.. just a fact
23:02 dijuremo But if you have 10gbps you should be able to push the ssds
23:04 Romeor i've got 10g
23:05 Romeor the performance could only be better when 5 servers read or write at full 1gbps at the same time .. i can't see such a situation in my environment
23:06 Romeor i've analyzed the real "speeds" of the servers, it's around 4-12 MB/sec ...
23:07 Romeor which is 32-100 mbps only
23:07 dijuremo Is that on the vms?
23:07 Romeor my 10g link is used for 120 mbps constantly only with 40+ vms
23:07 Romeor yes, on vms
23:08 dijuremo My read speeds out of the gluster file system are great for large files...
23:08 dijuremo [root@ysmha01 OS-ISO]# dd if=CentOS-7.0-1406-x86_64-DVD.iso of=/dev/null bs=1M
23:08 dijuremo 3956+0 records in
23:08 dijuremo 3956+0 records out
23:08 dijuremo 4148166656 bytes (4.1 GB) copied, 7.06392 s, 587 MB/s
23:08 dijuremo [root@ysmha01 OS-ISO]# mount |grep export
23:08 dijuremo 10.0.1.6:/export on /export type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
23:08 dijuremo [root@ysmha01 OS-ISO]# pwd
23:09 dijuremo And that is on fuse...
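One caveat with re-running that read test: after the first pass the ISO sits in the client page cache, so a repeat needs the caches dropped first (a sketch, assuming root on the client and the same file as above):

    sync; echo 3 > /proc/sys/vm/drop_caches                  # cold-start the client page cache
    dd if=CentOS-7.0-1406-x86_64-DVD.iso of=/dev/null bs=1M  # then re-run the read against the FUSE mount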
23:09 topshare joined #gluster
23:09 dijuremo It is just the reading of small files that is killing me... :......(
23:11 Romeor root@d8-test:~# dd if=/dev/zero of=1000MB bs=5MB count=200 200+0 records in 200+0 records out 1000000000 bytes (1.0 GB) copied, 3.02932 s, 330 MB/s
23:12 Romeor sas
23:12 Romeor :)
23:12 Romeor by the way
23:12 dijuremo Mine were also the SAS hitachi drives
23:13 dijuremo But I just copied that file from samba over to the windows VM which has the SSDs as a backend and the speed was close to 300MB/s
23:13 jdossey joined #gluster
23:13 Romeor JoeJulian ndevos _maserati_ seems like proxmox qemu-kvm and d8 virtio drivers won't be friends. i'll try again tomorrow.. gonna sleep now.. but with the SATA driver the installation of d8 and mate was fine. the one with the scsi driver is still installing, pretty slow :D
23:14 Romeor if it will be confirmed tomorrow, i will close the bugs and try to reach debian and proxmox somehow..
23:15 JoeJulian Might search lkml for bugs, too.
23:15 Romeor lkml ?
23:17 Romeor ok, the scsi now booted also with mate.
23:17 dijuremo This is on the SSDs, but it is not an apples to apples comparison
23:17 dijuremo [root@ysmha01 BACKUP]# dd if=/dev/zero of=1000MB bs=5MB count=200
23:17 dijuremo 200+0 records in
23:17 dijuremo 200+0 records out
23:17 dijuremo 1000000000 bytes (1.0 GB) copied, 2.53299 s, 395 MB/s
23:17 dijuremo [root@ysmha01 BACKUP]# dd if=1000MB of=/dev/null bs=5M
23:17 dijuremo 190+1 records in
23:17 dijuremo 190+1 records out
23:17 dijuremo 1000000000 bytes (1.0 GB) copied, 1.66001 s, 602 MB/s
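Worth noting for both of those write tests: dd from /dev/zero without a sync mostly measures the page cache rather than the disks, so a flushed variant is usually quoted alongside it (a sketch using the same sizes as above):

    # conv=fdatasync makes dd flush the data to stable storage before reporting the rate
    dd if=/dev/zero of=1000MB bs=5MB count=200 conv=fdatasync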
23:17 topshare joined #gluster
23:17 dijuremo Because I have 8 SSDs doing raid 10
23:17 plarsen joined #gluster
23:17 dijuremo For VMs
23:18 Romeor and if you use /dev/urandom you will end up with 9-13 MB/s
23:18 Romeor same as sas
23:18 dijuremo Well /dev/urandom is so slow because the kernel has to generate the pseudorandom output, so you end up measuring the CPU rather than the disks
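If incompressible data is wanted without benchmarking the random generator itself, the usual trick is to generate it once and then copy the pre-built file (a sketch; /tmp/random.bin and the target filename are placeholders, /export is the gluster mount from above):

    dd if=/dev/urandom of=/tmp/random.bin bs=1M count=1000             # slow, CPU-bound, done once up front
    dd if=/tmp/random.bin of=/export/ddtest.bin bs=1M conv=fdatasync   # this run times the storage, not the PRNG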
23:19 Romeor i personally prefer to use SSD as a real storage backend (for system boot, or as local storage for highly loaded VMs), then you'll see the real difference
23:20 dijuremo Well, I bought 8 256GB Samsung SSD pros for probably less than what 10K SAS drives cost...
23:21 Romeor dijuremo: yes, it is.. but i've used more complicated software to compare them, and ended up with not that big a difference over the net.. but again - it doesn't mean that ssds are bad... i just prefer to change disks once in 5-6 years :D
23:21 dijuremo Guess not... the 300GB are ~$100
23:22 dijuremo Well, samsung ssd 850 pros are rated 300TB of writes or 10 year warranty...
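Taking that rating at face value, the write budget works out roughly like this (back-of-the-envelope, using only the figures quoted above):

    # 300 TB of rated writes spread over a 10-year warranty window:
    #   300 TB / (10 * 365 days) ~= 82 GB of host writes per drive per day before the rating is exhausted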
23:23 Romeor disk prices are not the main thing here.. it's your time and the probability that 2-3 ssds will fail at the same time... when you buy disks, they usually come from the same series... and guess how the first disk's lifetime will compare to another one from the same series? right.. the same.
23:24 Romeor i had such situations already and don't want to meet them again :)
23:24 dijuremo Same thing with spinning drives...
23:24 dijuremo Did you ever have to deal with Seagate ES.2?
23:24 Romeor yes, but spinning drives usually live far longer than the project does
23:24 dijuremo And those were "enterprise" rated...
23:25 Romeor i've got a stack of 15k sas disks that are still alive and worked as the DB backend for a pretty loaded project for 6 years.
23:25 Romeor never changed any
23:25 dijuremo Nice...
23:26 dijuremo I really wanted to make sure iops would not be a problem running the vms, so I went with ssds...
23:26 Romeor and then we started a new project and 2 of 5 ssd disks in raid5 failed after 8 months
23:27 dijuremo Which is why I have 8 ssds per server and doing raid 10...
23:27 Romeor losing the amount of data you could have in raid5
23:27 dijuremo So I hope that holds...
23:28 dijuremo I do not need a lot of space, I need better reliability, so it was raid 6 or raid 10...
23:28 dijuremo again I really wanted to make sure iops would not be an issue, so I went raid10
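The iops reasoning behind picking raid10 over raid6 for the VM store comes down to the small-random-write penalty (rough, textbook numbers, not measurements from this setup):

    # backend IOs per small random host write:
    #   raid6  : read old data + read 2 parity blocks, write new data + write 2 parity  ~= 6 IOs
    #   raid10 : write the block to both mirror legs                                     =  2 IOs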
23:28 Romeor and yes ssds are good for performance and iops. if everything else is covered by some solution, they are best. i'm just too old school. i want things just configured and working :)
23:29 dijuremo I also have a pair of DELL R720s with 14 1.2TB SAS 10K rpm drives...
23:29 dijuremo That is also running gluster, but only has two meager VMs on it...
23:29 dijuremo One is an ssh server, the other a web server...
23:31 necrogami joined #gluster
23:32 Romeor oh damn. another hint from my infrastructure.. backups are running and everything is pretty unusable :D have to go to sleep now
23:32 Romeor backups are the only thing that maxes out 10gbps :D
23:34 gildub joined #gluster
23:35 Romeor can someone please advise me on a good irc client for linux
23:38 dijuremo @Romeor I have used pidgin for IRC in the past... works just fine...
23:40 Romanr joined #gluster
23:40 Romanr wee, now i'm here with xchat :)  hope some1 will give me +v
23:40 Romanr Romanr = Romeor
23:41 Romanr JoeJulian, could you please? I'm leaving with Romeor  nickname
23:42 lpabon joined #gluster
23:44 topshare joined #gluster
23:46 topshare_ joined #gluster
23:51 Romanr hm
23:52 * Romanr is away: Going awayyyyyyy...
