
IRC log for #gluster, 2017-11-30


All times shown according to UTC.

Time Nick Message
00:22 Champi joined #gluster
00:41 plarsen joined #gluster
00:44 farhorizon joined #gluster
00:50 bluenemo joined #gluster
00:52 aravindavk joined #gluster
01:03 farhorizon joined #gluster
01:18 MrAbaddon joined #gluster
02:02 gospod2 joined #gluster
02:18 daMaestro joined #gluster
02:23 nbalacha joined #gluster
02:32 daMaestro joined #gluster
02:57 vishnu_kunda joined #gluster
02:59 ilbot3 joined #gluster
02:59 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:03 masber joined #gluster
03:07 farhorizon joined #gluster
03:14 vishnu_sampath joined #gluster
03:15 gyadav__ joined #gluster
03:22 jcall joined #gluster
03:34 masber joined #gluster
03:38 shyu joined #gluster
03:54 kramdoss_ joined #gluster
03:59 msvbhat joined #gluster
04:09 nbalacha joined #gluster
04:10 ^andrea^ joined #gluster
04:10 nishanth joined #gluster
04:12 jiffin joined #gluster
04:16 Humble joined #gluster
04:42 Saravanakmr joined #gluster
04:42 hmamtora joined #gluster
04:42 hmamtora_ joined #gluster
04:45 atinm joined #gluster
04:47 ppai joined #gluster
04:56 karthik_us joined #gluster
04:58 hgowtham joined #gluster
05:07 Shu6h3ndu joined #gluster
05:09 sanoj joined #gluster
05:13 rafi joined #gluster
05:18 ndarshan joined #gluster
05:19 apandey joined #gluster
05:23 kotreshhr joined #gluster
05:24 rafi joined #gluster
05:25 skumar joined #gluster
05:26 msvbhat joined #gluster
05:29 Prasad joined #gluster
05:36 rafi1 joined #gluster
05:37 sunnyk joined #gluster
05:39 jiffin joined #gluster
05:42 Prasad_ joined #gluster
05:44 poornima joined #gluster
05:45 Prasad__ joined #gluster
05:48 sanoj joined #gluster
05:55 map1541 joined #gluster
06:00 msvbhat joined #gluster
06:03 XpineX joined #gluster
06:04 vishnu_sampath joined #gluster
06:13 susant joined #gluster
06:14 kdhananjay joined #gluster
06:15 xavih joined #gluster
06:42 masuberu joined #gluster
06:50 mbukatov joined #gluster
06:53 jkroon joined #gluster
06:53 Humble joined #gluster
07:03 vishnu_kunda joined #gluster
07:15 sanoj joined #gluster
07:21 jtux joined #gluster
07:33 prasanth joined #gluster
07:39 sanoj joined #gluster
07:42 vishnu_sampath joined #gluster
07:51 shyu joined #gluster
07:54 XpineX joined #gluster
07:59 nishanth joined #gluster
08:00 nishanth joined #gluster
08:01 nishanth joined #gluster
08:04 nishanth joined #gluster
08:16 nbalacha joined #gluster
08:32 vishnu_kunda joined #gluster
08:38 ivan_rossi joined #gluster
08:43 rastar joined #gluster
08:46 _KaszpiR_ joined #gluster
09:02 fsimonce joined #gluster
09:06 ahino joined #gluster
09:07 vishnu_sampath joined #gluster
09:15 rastar joined #gluster
09:21 vishnu_kunda joined #gluster
09:25 Humble joined #gluster
09:28 vishnu_sampath joined #gluster
09:36 pester joined #gluster
09:37 Teraii_ joined #gluster
09:39 al_ joined #gluster
09:39 Bardack joined #gluster
09:40 yawkat joined #gluster
09:40 irated joined #gluster
09:40 [o__o] joined #gluster
09:42 john51 joined #gluster
09:42 crag joined #gluster
09:43 [diablo] joined #gluster
09:44 marlinc joined #gluster
09:44 tamalsaha[m] joined #gluster
09:48 toredl joined #gluster
09:49 kramdoss_ joined #gluster
09:50 ivan_rossi left #gluster
10:04 ndk- joined #gluster
10:05 jackhill1 joined #gluster
10:10 vishnu_sampath joined #gluster
10:10 arif-ali joined #gluster
10:10 ashka joined #gluster
10:10 ashka joined #gluster
10:14 decayofmind joined #gluster
10:15 squarebracket joined #gluster
10:22 smohan[m] joined #gluster
10:30 bitchecker joined #gluster
10:34 bitchecker joined #gluster
10:35 sanoj joined #gluster
10:48 Shu6h3ndu joined #gluster
10:59 MrAbaddon joined #gluster
11:03 Teraii__ joined #gluster
11:09 Teraii_ joined #gluster
11:15 rastar joined #gluster
11:17 rastar joined #gluster
11:19 decayofmind joined #gluster
11:19 vbellur joined #gluster
11:36 bitchecker joined #gluster
11:40 ThHirsch joined #gluster
11:51 vishnu_sampath joined #gluster
11:51 skoduri joined #gluster
11:54 bfoster joined #gluster
11:56 _KaszpiR_ joined #gluster
11:58 ahino joined #gluster
11:59 ompragash joined #gluster
12:14 Humble joined #gluster
12:30 smohan[m] joined #gluster
12:53 _KaszpiR_ joined #gluster
12:54 yosafbridge joined #gluster
13:00 map1541 joined #gluster
13:05 nbalacha joined #gluster
13:06 phlogistonjohn joined #gluster
13:09 tamalsaha[m] joined #gluster
13:09 marin[m] joined #gluster
13:20 shyam joined #gluster
13:26 jkroon joined #gluster
13:34 jackhill joined #gluster
13:38 jkroon joined #gluster
13:38 sunny joined #gluster
13:39 skoduri joined #gluster
13:54 sunny joined #gluster
13:55 shyam joined #gluster
13:57 plarsen joined #gluster
14:06 nishanth joined #gluster
14:19 boutcheee520 joined #gluster
14:33 rwheeler joined #gluster
14:42 Ulrar joined #gluster
14:47 07IAB1B09 joined #gluster
14:47 7GHABEGM3 joined #gluster
14:48 gyadav__ joined #gluster
14:53 phlogistonjohn joined #gluster
14:55 xoritor joined #gluster
14:56 shyam joined #gluster
14:57 kramdoss_ joined #gluster
15:00 msvbhat joined #gluster
15:19 MrAbaddon joined #gluster
15:23 shyam joined #gluster
15:23 xoritor does libgfapi use rdma for virtual machines?
15:24 xoritor i am testing using plain libvirt vms on a replicated volume with just transport rdma
15:28 kkeithley It should, if the VM has the rdma "hardware"
15:29 xoritor you mean if the vm HOST has the hardware right?
15:29 kkeithley if the HV has IB hardware then I presume the guests have access to it.
15:29 xoritor i have mellanox ib cards in the host and the host is doing the glusterfs with rdma
15:30 xoritor i think we are crossed
15:30 xoritor i am asking if the host glusterfs volume will use rdma for the transport of the qcow2 file
15:30 kkeithley you want the vms (guests) to use rdma to talk to gluster?
15:30 xoritor not about ib inside of the vm
15:31 xoritor kkeithley, nope... just the host to host glusterfs using rdma
15:31 xoritor vms dont need to do anything with rdma
15:32 xoritor the rdma docs are not too clear as to what will actually use rdma when dealing with libgfapi
15:32 xoritor http://docs.gluster.org/en/latest/Administrator%20Guide/RDMA%20Transport/
15:32 glusterbot Title: RDMA Transport - Gluster Docs (at docs.gluster.org)
15:32 alan113696 joined #gluster
15:32 kkeithley early on libgfapi had a bug where it only used tcp. that was fixed
15:33 xoritor so are the docs wrong?
15:33 kkeithley if the volume is created with rdma then gfapi will use rdma
15:33 xoritor sweet
15:33 kkeithley which docs?
15:33 xoritor the one i posted a link to
15:34 xoritor it says (just after the first paragraph) "NOTE: As of now only FUSE client and gNFS server would support RDMA transport."
15:34 kkeithley pretty sure the bug was fixed.
15:34 xoritor LOL
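For reference: whether gfapi can use RDMA is decided by the volume's transport, which is set at creation time. A rough sketch, with placeholder hostnames, volume name and brick paths:
    gluster volume create vmstore replica 3 transport tcp,rdma \
        server1:/bricks/vmstore server2:/bricks/vmstore server3:/bricks/vmstore
    gluster volume start vmstore
    gluster volume info vmstore    # "Transport-type: tcp,rdma" confirms both transports are enabled
"transport rdma" alone gives the RDMA-only setup xoritor is testing.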
15:36 alan113696 i have some newbie questions - is this the right forum to ask?
15:37 xoritor alan113696, ask away... we may be able to provide some insight
15:37 xoritor alan113696, in my case it will probably be more or less snarky though ;-)
15:38 alan113696 i'm just experimenting with gluster 3.12.1
15:38 alan113696 on AWS EC2 instances
15:38 xoritor kkeithley, i would expect a bit better than 100 MB/s from rdma on ib hardware
15:39 alan113696 i don't need to use the snapshot ability, so I just formatted an entire EBS block device (/dev/xvdb) as XFS and mounted as a brick
15:39 bitchecker joined #gluster
15:40 xoritor alan113696, during format did you use -i size=512
15:40 xoritor ?
15:40 alan113696 i believe various docs say to use LVM thin pools, but is this needed if I don't care about snapshotting?
15:40 xoritor nope
15:40 alan113696 did NOT use size=512
15:40 alan113696 just took defaults
15:40 xoritor almost all of the docs say use -i size=512
15:40 alan113696 ok - I can add that
15:40 xoritor now... i could be wrong, and it may not matter anymore
15:41 xoritor but i still do it cause... well... im a creature of habit
15:41 alan113696 i can check the defaults and add if needed
15:41 alan113696 seems to work with my setup and i'm seeing the expected performance with  using distribute and replication
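The inode-size advice above comes from the Gluster admin guides; a sketch of preparing a brick that way (the device matches alan113696's /dev/xvdb, the mount point is a placeholder):
    mkfs.xfs -f -i size=512 /dev/xvdb    # 512-byte inodes leave room for gluster's extended attributes
    mkdir -p /data/brick1
    echo '/dev/xvdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
    mount /data/brick1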
15:42 alan113696 a different issue i'm having is reliable startup of glusterfsd (?) on boot
15:42 xoritor hmm... someone with more internals knowledge may need to answer that one
15:43 alan113696 poking around the internet i see some posting about the same issue
15:43 xoritor what distro are you using?
15:43 xoritor its reliable for me, but then i dont use ec2
15:43 alan113696 centos 7 (latest) AMI
15:43 alan113696 seems to be the systemd unit file
15:43 xoritor did you do the systemctl enable glusterd
15:43 xoritor ?
15:43 alan113696 yup
15:44 alan113696 Before=network-online.target
15:44 xoritor do you have the ports open in the firewall?
15:44 alan113696 seems to be the issue
15:44 alan113696 yup
15:44 xoritor on centos7 bare metal installs it comes up every time for me
15:44 xoritor maybe it is something with ec2?
15:44 alan113696 this ordering seems to be needed for self mounts on the box
15:45 alan113696 could be ec2 related
15:45 xoritor maybe add a retry always
15:45 xoritor or something
15:45 xoritor systemd has some ways of doing that
15:45 om2 joined #gluster
15:46 alan113696 work-around is to revise the systemd unit for glusterd to start gluster after network-online.target
15:46 alan113696 that seems to work
15:46 farhorizon joined #gluster
15:46 alan113696 and then if self mount of the volume is needed, then use automount
15:47 xoritor what about adding _net to the mountpoint in fstab?
15:47 alan113696 so I have a work-around, but just wondering if the team is working on a fix
15:48 alan113696 for some reason, the volume won't come up when Before=network-online.target is specified - I have to get rid of that and change to After in order for boot up to work
15:48 xoritor sorry its _netdev i guess
15:48 xoritor yea, i am pretty sure that is an ec2 issue... but i cant be 100% positive
15:48 alan113696 yea
15:49 alan113696 so work-arounds exist, but hoping that it's in the backlog for developers to look at
15:50 alan113696 interesting if this never happens on physical machines (vs. VM)
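Roughly what the workaround described above looks like on CentOS 7 (volume name and mount point are placeholders; a full copy of the unit is used because ordering deps like Before= can't be removed via a drop-in):
    cp /usr/lib/systemd/system/glusterd.service /etc/systemd/system/glusterd.service
    # in the copy: drop "Before=network-online.target" and add
    #   Wants=network-online.target
    #   After=network-online.target
    systemctl daemon-reload && systemctl enable glusterd
and for the self-mount in /etc/fstab, the _netdev option xoritor mentions (or add noauto,x-systemd.automount for the automount approach):
    localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0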
15:51 alan113696 switching topics...I'd like to co-locate gluster services on compute nodes in a large cluster
15:51 xoritor just cause i am curious and in testing mode
15:51 alan113696 looking for recommendations
15:51 xoritor i am rebooting some just to test
15:52 alan113696 compute nodes will run jobs that will peg all CPU's provided to the resource manager
15:52 alan113696 how much CPU does gluster need?
15:52 xoritor alan113696, that depends on a LOT of things
15:52 alan113696 i can deduct CPU's from the resource manager budget across nodes, but I need to know if gluster is cpu hungry
15:53 xoritor amount of data, clients, use case, etc... etc... etc...
15:53 alan113696 yeah - tough to answer
15:53 xoritor set it up and test it with some real world data
15:53 xoritor the closer you can get to what you are doing the better
15:54 prasanth joined #gluster
15:54 alan113696 i don't see gluster using more than a CPU when doing sequential writes across many clients
15:54 xoritor thats what i am in the process of doing right now
15:54 xoritor i have seen it peg multiple cores at 100% for long periods of time
15:54 xoritor or not use any
15:54 alan113696 really?
15:54 xoritor some of that depends on load, etc.. etc..
15:55 xoritor a rebalance or heal will use tons
15:57 alan113696 if deploying to 128 nodes, using distribute/replicate, do you have any recommendations due to the size of the gluster cluster?
15:57 xoritor so using dd to write zeros into a file 4GB in size over rdma has gluster hit 70% or more
15:57 xoritor i have not dealt with one that big
15:57 xoritor maybe ask someone that has ;-)
15:57 xoritor heh
15:57 alan113696 where 70% is 70% of 1 x CPU? of 70% of all CPU?
15:58 alan113696 of = or
15:58 xoritor 70% of one core on 1 cpu
15:58 alan113696 yeah - that's what I see
15:58 xoritor but then there are times when it will really hammer it hard
15:58 xoritor mostly a heal or rebalance
15:58 alan113696 gives me hope that I can merely deduct 2 x CPU for gluster and resource manager agent, and give rest to compute
15:58 xoritor especially with lots of little files
15:59 xoritor doing a rebalance with about 100 GB of word doc files caused it to hit pretty high loads
15:59 xoritor at least in the past
16:00 xoritor that may be fixed now
16:00 farhoriz_ joined #gluster
16:00 alan113696 i probably need to test smaller files - perhaps i can tease out a higher cpu load
16:00 xoritor main thing i can say is test test test
16:00 alan113696 roger that!
16:00 xoritor heh
16:00 ic0n joined #gluster
16:01 xoritor i find that replicate works better for some things, and dist-repl for others
16:01 alan113696 is there a proper gluster shutdown and startup procedure?
16:01 xoritor i dont use much in the way of distribute as i need multiples for reliability
16:01 xoritor its pretty darn resilient
16:02 alan113696 on AWS, I just stop all the instances at the same time
16:02 alan113696 then start them at the same time
16:02 alan113696 seems to be OK
16:02 alan113696 no master no right? pretty cool...
16:03 alan113696 no master node right?
16:03 xoritor yep
16:03 xoritor no no master
16:03 alan113696 love it!
16:03 xoritor yea, i keep trying others
16:03 xoritor but i keep coming back
16:04 xoritor KISS and easy get me every time
16:04 Gambit15 joined #gluster
16:04 alan113696 we had a lustre cluster go down on us and we lost most of our data - nasty
16:05 xoritor yea, and things do happen even with glusterfs... it is not perfect
16:06 xoritor the main thing is that even if your stuff goes down it is pretty easy to get the data back out
16:06 alan113696 any lessons learned to share?
16:06 xoritor keep backups ;-)
16:06 alan113696 yuk
16:06 xoritor always have more than one failover for quorum
16:06 xoritor ie 3 is ok... but 5 is better
16:07 alan113696 i haven't touched on that when reviewing the docs - can you explain quorum?
16:07 xoritor there are diminishing points of return on that at higher numbers
16:07 alan113696 arbiter vol?
16:08 xoritor ok... so every glusterd gets a vote if the majority agree you are in quorum
16:08 alan113696 split brain scenario?
16:08 xoritor split-brain is when you have an even number saying they are quorate
16:08 xoritor so even numbers are bad
16:08 xoritor odd numbers means someone gets a +1
16:09 kpease joined #gluster
16:09 xoritor fewer chances for split-brain
16:09 alan113696 does that mean in a replicate scenario, use an odd number?
16:10 cloph not just in replicate scenario, basically always :-)
16:10 xoritor yea... always
16:11 alan113696 ha ha - i was using 2
16:11 xoritor its not about how the data is spread (replicate, distribute, dist-repl, etc...) it is about how the glusterd server talks to other glusterd servers
16:11 cloph there are two levels of quorum - on the peer level (how many servers from the cluster are up), and on the volume level (how many bricks are up) - those aren't necessarily the same
16:11 xoritor cloph, thats a better answer
16:11 jbrooks joined #gluster
16:12 xoritor i need more [c]D (thats my ascii coffee cup)
16:13 alan113696 trying to digest - so odd number of glusterd servers
16:13 xoritor odd is good
16:14 alan113696 i'll review the docs to make sure I understand that one
16:14 cloph yes, that will allow them to distinguish between a network communication issue and the server process just being down.
16:15 xoritor quorum is pretty universal... google it and there are TONS of docs at all different levels
16:15 cloph (otherwise all servers might be up, but just 2 can talk to each other, and two others can talk to each other, but no communication between the two sets - in this case both sets could think they were the "only survivors")
16:15 alan113696 ahhh...
16:15 alan113696 making more sense
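The two quorum levels cloph describes map to separate volume options; a sketch with 'myvol' as a placeholder name:
    # peer/server level: glusterd stops its bricks if too few peers are reachable
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
    # volume/brick level: clients refuse writes unless enough bricks of a replica set are up
    gluster volume set myvol cluster.quorum-type auto    # auto = more than half of the replica set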
16:16 xoritor cloph, i saw that when i had iptables blocking ports for the volumes on a few hosts
16:16 xoritor took me a while to figure out what was happening
16:16 xoritor glusterd could talk... but the volumes didnt see all of the bricks
16:17 alan113696 if split brain does happen, gluster will let you know during heal process right?
16:17 alan113696 i think i saw some command-line options
16:17 xoritor the docs have some info on both auto and manual split-brain healing
16:17 cloph yes - in case of split brain you need to tell gluster how to resolve the conflict (i.e. which file/metadata should win)
16:17 xoritor yep
16:18 alan113696 sweet
16:18 cloph newer versions of gluster allow you to specify a default policy (largest file wins, latest timestamp wins,...) so you don't have to manually pick it (but I also consider that telling gluster what to do)
16:18 alan113696 is that in 3.12.1?
16:18 xoritor that is very nice compared 5 years ago
16:18 xoritor ;-)
16:20 cloph not sure whether it already was available in 3.10, but 3.11/3.12 definitely have it.
16:20 alan113696 googling around - seems to be in 3.8 even
16:20 cloph and even manual split-brain healing is much easier than before (before you couldn't use management commands, but had to manually set extended attributes - scary stuff :-))
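Those two approaches look roughly like this on the CLI ('myvol', the brick and the file path are placeholders):
    # default policy for automatic resolution (the "biggest file / latest timestamp wins" behaviour):
    gluster volume set myvol cluster.favorite-child-policy size    # or mtime, ctime, majority
    # manual resolution through the heal command instead of raw xattrs:
    gluster volume heal myvol split-brain latest-mtime /path/in/volume
    gluster volume heal myvol split-brain source-brick server1:/bricks/myvol /path/in/volume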
16:22 alan113696 cloph - do you have experience setting up glusterfs on a large number of nodes like 128?
16:22 alan113696 just wondering if you have any lessons learned to share
16:23 cloph Nope, only small scale, 6 servers in total only (but not all in the same cluster)
16:23 xoritor cloph, yea the manual xattr stuff sucked
16:23 cloph I think the base lesson is: don't go crazy with replication number :-)
16:24 alan113696 does replication number matter with respect to the quorum discussion?
16:24 xoritor i bet a dist-repl would work better than just a plain replicate
16:24 alan113696 i can set up an odd number of glusterd
16:24 alan113696 but i'd like to stick with replica = 2
16:24 timotheus1_ joined #gluster
16:25 ic0n joined #gluster
16:25 xoritor replica 2 over 128 nodes should be fine
16:25 cloph only indirectly - if you have more bricks holding the same data online, it is easier than if the bricks are supposed to hold different data to begin with.
16:26 cloph so in a distributed-replica it depends which bricks are online for the data to be in consistent/non-ambiguous state
16:27 cloph just consider whether you're fine with two servers having fatal issues and those make up the bricks holding the two copies of the data.
16:27 xoritor cloph, ok thats very true
16:27 alan113696 right - just two copies of the data
16:27 alan113696 i can accept the risk
16:28 xoritor at 128 nodes those copies could be on any nodes
16:28 xoritor if those nodes are not up that data is not there
16:28 xoritor alan113696, again... test test test
16:29 cloph I see no point in not using replica 3 or 2 with arbiter though. Gives another level of quorum/less likely to end up in split brain.
16:29 cloph with replica 2 it means that even if just one server goes down, the replica-set is not meeting quorum.
16:30 cloph so use replica 2 with arbiter or replica 3 would be my recommendation
16:30 alan113696 arbiter just stores metadata?
16:30 cloph yes, that's right.
16:30 xoritor with 128 nodes you should be able to do replica 3 or 5 with no issues
16:31 xoritor i dont know that i would really go over 5 though
16:31 alan113696 but you need one arbiter per replica-set
16:31 xoritor not unless you have to make sure the data is always available
16:32 xoritor alan113696, it lays out arbiters pretty much automatically ie.. replica 2 with arbiter gives you host1 host2 host3(arbiter) host4 host5 host6(arbiter)....
16:32 alan113696 ah, ok
16:32 xoritor so every third host is an arbiter
16:33 cloph s/host/brick/
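At volume-creation time that ordering is just the brick list; a sketch with placeholder servers and paths, where every third brick becomes the metadata-only arbiter of its replica set:
    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb1 \
        server4:/bricks/b2 server5:/bricks/b2 server6:/bricks/arb2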
16:33 alan113696 but this is better than replica=3 because it will consume less network bandwidth
16:33 glusterbot cloph: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
16:33 xoritor cloph, true
16:33 xoritor less space too
16:34 cloph remember that clients write to all replicas at the same time, so if you mount a "replica 50", the poor client would have to write data to 50 bricks - that likely will overwhelm the client's bandwidth :-)
16:35 ic0n joined #gluster
16:35 alan113696 we're investigating glusterfs as a performance improvement over a single node NFS
16:35 alan113696 don't want to ruin that due to replica=128
16:36 alan113696 cloph - didn't know that it's the client that has to write to multiple servers to implement replication
16:36 xoritor some things need to make sure you have the data so they need high replica counts, others don't... it all comes down to your use case
16:37 cloph not sure re scaling about so many hosts though, guess you should also experiment with using smaller number of peers (unless you need that for capacity of the volumes) - not sure when overhead of communicating between the peers has negative impact.
16:37 alan113696 xoritor - yup! in our use-case, we just need a cluster to live long enough to run a job, and then have the results transferred to S3. Then the cluster gets thrown away
16:38 cloph guess also worth looking into tiering / using "hot" and "cold" storage
16:38 alan113696 cloph - little worried about 128 nodes and the glusterd chatter
16:38 alan113696 we can always just have a separate cluster for gluster that is independent of compute
16:39 alan113696 just trying to save $$$
16:39 alan113696 by co-locating storage with compute
16:41 alan113696 cloph - so is it the client-side FUSE-based gluster driver (AFR?) that performs the replication by writing data in parallel to remote servers?
16:42 xoritor alan113696, you can sort of hack around that if you are worried about bandwidth from your location to the remote ec2
16:43 xoritor alan113696, or do some sort of geo-repl
16:44 saybeano joined #gluster
16:44 gyadav joined #gluster
16:45 xoritor what i have done in the past is to mount glusterfs via fuse on one remote node then push the files via rsync/scp/whatever into the fuse mounted directory and let that "replicate" them to all of the other nodes
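That workflow is just a normal fuse mount plus a copy into it; a sketch with placeholder server, volume and paths:
    mount -t glusterfs server1:/myvol /mnt/myvol    # fuse mount on one node
    rsync -av /local/data/ /mnt/myvol/data/         # the client fans the writes out to the replica bricks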
16:46 cloph alan113696: don't quote me on this, but yes, I think it is the fuse-client that at least triggers the write, but I don't have much insight in the lower-level workings
16:54 alan113696 xoritor and cloph - you've been a HUGE help! appreciate all the recommendations you've provided!
16:54 xoritor alan113696, good luck
16:55 jiffin joined #gluster
16:58 msvbhat joined #gluster
17:02 ic0n joined #gluster
17:05 jbrooks joined #gluster
17:10 xoritor ok... so my testing of libgfapi using only rdma transport fails to work with libvirt to create VMs; it works with qemu-img create to make the disk image, but is not seen by libvirt. after i switch the transport to tcp,rdma it works fine in libvirt
17:16 xoritor that is on centos 7.4 with glusterfs-server-3.12.1-2.el7.x86_64 via the centos-release-gluster312.noarch : Gluster 3.12 (Long Term Stable) packages from the CentOS Storage SIG repository
17:17 xoritor other than that repo enabled it is pure centos 7.4
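For context, the pieces being tested above (server, volume and image names are placeholders; whether the rdma-only path works end-to-end is exactly what failed for xoritor):
    qemu-img create -f qcow2 gluster+rdma://server1/vmstore/vm1.qcow2 20G
and the corresponding libvirt disk definition, where the transport attribute is what selects tcp vs rdma:
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source protocol='gluster' name='vmstore/vm1.qcow2'>
        <host name='server1' port='24007' transport='rdma'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>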
17:19 ThHirsch joined #gluster
17:20 ziemowitp joined #gluster
17:21 ziemowitp can I run Gluster 3.8 and 3.10 in a single cluster?
17:23 _KaszpiR_ joined #gluster
17:30 ziemowitp I keep on getting peer rejected after probing from gluster 3.8 to 3.10.  I tried removing /var/lib/gluster but it didn't make a difference.
17:33 jiffin joined #gluster
17:44 bjuanico joined #gluster
17:44 xoritor ziemowitp, sorry i cant help you... i have no idea about that
17:45 bjuanico is there a way to downgrade to 3.10 from 3.12, i thought it was stable and updated to that version, and its not stable at all, the system is in production
17:45 xoritor inside of the vm on 4x4 replicated glusterfs using transport tcp,rdma on infiniband i am getting writes of 400 MB/s and reads of about 560 MB/s
17:46 xoritor that is with cache=none
17:46 bjuanico can i downgrade one of the nodes to 3.10 and hook it back up with the 3.12 volume?
17:48 JoeJulian ziemowitp: you should be able to. Look in the glusterd logs to determine why the peering is failing.
17:48 msvbhat joined #gluster
17:48 JoeJulian bjuanico: Sure, just downgrade. There's no magic to it.
17:49 Humble joined #gluster
17:50 JoeJulian xoritor++
17:50 glusterbot JoeJulian: xoritor's karma is now 1
17:50 JoeJulian cloph++
17:50 glusterbot JoeJulian: cloph's karma is now 6
17:51 xoritor hi JoeJulian... been a while
17:52 JoeJulian xoritor: qemu-img and qemu both use the same libgfapi library so if it works for img, it shoud be able to work for qemu.
17:52 JoeJulian and... HI! :D
17:52 xoritor haha... yea that's what i thought too, but maybe i have something misconfigured
17:53 xoritor it works with decent throughput using tcp,rdma though
17:53 Shu6h3ndu joined #gluster
17:53 shyam joined #gluster
17:53 JoeJulian That suggests to me that it's not a bug in libgfapi, but rather something in qemu. Have you tried configuring qemu to put the client log somewhere and take a look?
17:53 xoritor not yet, that is on my todo list though
17:55 xoritor honestly i may not even bother other than to give some feedback here
17:55 jiffin joined #gluster
17:55 xoritor i am getting just about native performance for the drives i have
17:56 xoritor if things puke when i add the 5th server in i will really look into it
17:57 xoritor but for right now, near native disk performance inside the vm with it replicating on 4 hosts suits me fine
17:58 ic0n joined #gluster
18:00 xoritor now that may not be the case with multiple vms all doing read/write at the same time
18:03 jiffin joined #gluster
18:07 RantDimlyDo joined #gluster
18:12 om2 joined #gluster
18:21 msvbhat joined #gluster
18:22 WebertRLZ joined #gluster
18:23 jiffin joined #gluster
18:34 kpease joined #gluster
18:45 rastar joined #gluster
18:48 ThHirsch joined #gluster
18:52 ahino joined #gluster
19:04 xoritor this is somewhat as i expected for results with no caching and replica 4... when doing dd if=/dev/zero of=bigfile bs=4k count=1000000 conv=fdatasync inside of 2 vms at the same time on 2 of the 4 hosts the numbers drop to about 250 MB/s
19:05 xoritor thats actually better than i thought it would be
19:05 xoritor its close to 400 MB/s with one host
19:05 xoritor and native for that disk is around 400 MB/s
19:06 xoritor so i can not complain about any of that
19:22 bitchecker joined #gluster
19:32 major joined #gluster
19:35 ziemowitp JoeJulian: it's failing because it can't find one of the friends which is 2nd node in the older cluster.
19:35 ziemowitp and then it says it's not local... whatever that means
19:37 msvbhat joined #gluster
19:51 ziemowitp hmmm, maybe one of the nodes keeps on rejecting the new ones
19:51 primehaxor joined #gluster
19:52 timotheu_ joined #gluster
19:58 ziemowitp it keeps on saying it's unable to find nodes in the cluster... how does gluster get the list of nodes?
20:02 ziemowitp i copied all the files from /var/lib/glusterd/peers to the new node and it went ahead but now it's stuck on something else.
20:38 om2 joined #gluster
20:57 farhorizon joined #gluster
21:25 farhorizon joined #gluster
21:29 JoeJulian @later tell ziemowitp If you copy /var/lib/glusterd/peers to a new node, you would have to ensure that node's self is not present in there.
21:29 glusterbot JoeJulian: The operation succeeded.
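A rough sketch of the check JoeJulian describes (standard glusterd state paths; exact file contents vary by version):
    cat /var/lib/glusterd/glusterd.info    # UUID=... is this node's own identity
    ls /var/lib/glusterd/peers/            # one file per *other* peer, each named by that peer's UUID
    # a file named with the local UUID must not exist here; remove it, then restart glusterd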
21:44 jkroon joined #gluster
21:46 arpu joined #gluster
21:52 cliluw joined #gluster
21:53 cliluw joined #gluster
22:07 joshin joined #gluster
22:09 ThHirsch joined #gluster
22:13 glisignoli When using 'gluster volume heal <volume> statistics' how can I clear statistics? Whenever I try to get statistics now, the operation just times out
22:17 mcmx2 joined #gluster
22:18 mcmx2 Hi, i'm testing gluster, i've set up a replicated volume between 2 servers, mounted on both. When i reboot one server i can no longer access the volume on the server that is up, i'm not sure what i'm doing wrong
22:19 Teraii joined #gluster
22:20 jkroon joined #gluster
22:22 marlinc joined #gluster
22:34 Wizek_ joined #gluster
22:46 ic0n joined #gluster
22:54 vbellur joined #gluster
22:55 vbellur joined #gluster
22:56 vbellur joined #gluster
23:05 shyam joined #gluster
23:15 JoeJulian glisignoli: Have you tried restarting all the glusterd?
23:16 JoeJulian and mcmx2 is gone
23:21 vbellur joined #gluster
23:33 glisignoli JoeJulian: Err, I don't really want to restart my gluster daemons if I can help it
23:33 JoeJulian why?
23:33 glisignoli Won't that cause issues within my cluster?
23:34 JoeJulian No. Restarting glusterd does not restart glusterfsd (on purpose). glusterfsd provides the actual data server.
23:34 JoeJulian glusterd is just the management daemon.
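On a systemd host that restart only touches the management daemon; the brick processes keep serving data, which can be checked before and after:
    pgrep -af glusterfsd          # brick (data) processes
    systemctl restart glusterd    # management daemon only; the glusterfsd PIDs above are untouched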
23:41 joshin left #gluster
23:54 glisignoli Hmmm
23:56 glisignoli It does seem like a drastic way to clear some statistics
