
IRC log for #gluster, 2016-06-17


All times shown according to UTC.

Time Nick Message
00:13 paul981 exi
00:13 paul981 t
00:36 amye joined #gluster
00:53 rafi joined #gluster
00:56 shdeng joined #gluster
00:59 Alghost_ joined #gluster
01:35 Lee1092 joined #gluster
01:35 kotreshhr joined #gluster
01:35 kotreshhr left #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:59 haomaiwang joined #gluster
02:35 magrawal joined #gluster
02:45 lanning joined #gluster
02:57 siel joined #gluster
03:11 Gambit15 joined #gluster
03:22 raghug joined #gluster
03:40 nishanth joined #gluster
03:49 gem joined #gluster
04:01 arcolife joined #gluster
04:05 itisravi joined #gluster
04:07 shubhendu joined #gluster
04:09 Alghost joined #gluster
04:12 atinm joined #gluster
04:13 d0nn1e joined #gluster
04:18 poornimag joined #gluster
04:20 nehar joined #gluster
04:24 RameshN joined #gluster
04:29 aspandey joined #gluster
04:29 kotreshhr joined #gluster
04:36 kotreshhr joined #gluster
04:41 msvbhat joined #gluster
04:42 nbalacha joined #gluster
04:53 atrius joined #gluster
04:55 ramky joined #gluster
04:58 aravindavk joined #gluster
04:59 prasanth joined #gluster
05:00 kshlm joined #gluster
05:02 kotreshhr joined #gluster
05:02 ndarshan joined #gluster
05:04 tg2 joined #gluster
05:07 masuberu joined #gluster
05:07 masuberu Hi
05:07 glusterbot masuberu: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:07 masuberu would you recommend CentOS6.7 to run a gluster cluster on production?
05:08 nbalacha joined #gluster
05:08 masuberu the reason I am asking is because our deployment environment relies on rocks cluster which is not compatible with centos7 yet
05:08 jith_ hi all, i am very new to glusterfs, but i have read the ceph doc and part of the glusterfs doc... For adding nodes to a cluster, in ceph we initially set up passwordless ssh so we can easily add the nodes to the cluster.. but in the case of glusterfs, there is no such setup.. so just a gluster peer probe node2 command will be enough to add a new node. how come that is possible? so anyone in the same...
05:08 jith_ ...network can add node2 without node2's permission??? i am sorry if i am wrong
05:11 Saravanakmr joined #gluster
05:11 sakshi joined #gluster
05:17 karthik___ joined #gluster
05:18 amye joined #gluster
05:19 JoeJulian jith_: Only initially. Once you've established a trusted peer group, only a trusted peer can probe a new server into the group.
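A minimal sketch of that first probe, run from a host that is already inside the trusted pool (hostnames are illustrative; this is why an outside machine cannot simply add itself):

    gluster peer probe node2    # issued on an existing trusted peer, e.g. node1
    gluster peer status         # node2 should now appear as a connected peer
    gluster peer probe node1    # optionally run once from node2 so node1 is recorded by hostname rather than IP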
05:19 JoeJulian masuberu: Sure, plenty of people still use CentOS 6
05:19 rhqq no way. there's somebody answering the questions here :O
05:21 masuberu JoeJulian: thanks!
05:23 JoeJulian rhqq: Pfft.. I answer questions all day.
05:23 rhqq im here for last 15 hours or something like that and those were first answers here :P
05:23 JoeJulian Scrolling back, you just missed me. I went to a friends college graduation today.
05:24 JoeJulian But if you want me to apologize, just say so.
05:24 rhqq nah :)
05:24 rhqq i was just surprised :P
05:24 rhqq that channel felt dead to me, despite of 256 (what a nice number!) ppl in here
05:24 ppai joined #gluster
05:24 rhqq ...
05:25 nigelb a lot of us are still waking up...
05:26 rhqq anyways. i've still the same question i asked earlier here. i have an issue that i can't troubleshoot. gluster takes all the cpu cycles, despite there being no IO on the underlying bricks
05:26 rhqq 60s stats: 0/0 r/w and 10k lookup and 10k readdirp fops
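Assuming those numbers came from the volume profile interface, a sketch of how such a 60-second sample is usually gathered (volume name is illustrative):

    gluster volume profile myvol start   # enable the io-stats counters
    sleep 60
    gluster volume profile myvol info    # per-brick fop counts (lookup, readdirp, read, write) and latencies
    gluster volume profile myvol stop    # profiling adds a little overhead, so switch it off afterwards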
05:26 JoeJulian My guess would be the self-heal daemon.
05:27 rhqq status/heal info and so on say there were no heal operations at all
05:27 itisravi joined #gluster
05:28 kramdoss_ joined #gluster
05:28 jiffin joined #gluster
05:29 JoeJulian No clue then. Obviously something is walking the directory tree. If I was trying to identify what does that I would probably choose wireshark. It does know how to decode gluster rpcs and should tell you pretty quickly where that traffic is coming from.
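A rough capture recipe for that, assuming the default ports (24007 for glusterd, 49152 and up for bricks) and an illustrative interface name:

    tcpdump -i eth1 -s0 -w gluster.pcap 'port 24007 or portrange 49152-49251'
    # open gluster.pcap in Wireshark; its GlusterFS dissector decodes the RPCs, and
    # Statistics -> Conversations shows which client the lookup/readdirp storm comes from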
05:29 masuberu joined #gluster
05:30 rhqq ah cool, i didnt know
05:30 rhqq i'll try that
05:30 rhqq any other hints?
05:30 JoeJulian I usually tell people to check the log files, but I doubt you'll see anything interesting in them.
05:31 JoeJulian Do it anyway. ;)
05:31 rhqq nothing unusual
05:31 rhqq vs the state that everything was working fine
05:31 JoeJulian Things are not working fine?
05:31 rhqq well, when cpu hits the roof
05:31 rhqq the clients using the storage are significantly slowing down
05:32 JoeJulian Ah, that's a different story. What version?
05:32 rhqq 3.6.1
05:32 JoeJulian egads
05:32 rhqq ?
05:32 Apeksha joined #gluster
05:33 JoeJulian 3.6.9 is the latest in the 3.6 series.
05:33 rhqq well. it was working fine, and storage is quite tricky thing to administer regardless
05:33 JoeJulian http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/
05:33 glusterbot Title: Index of /pub/gluster/glusterfs/3.6/LATEST (at download.gluster.org)
05:33 rhqq so if it was working fine (2 years uptime)
05:33 rhqq i assumed thats rather not an issue
05:33 JoeJulian I do wish there were release notes in that directory...
05:34 Manikandan joined #gluster
05:34 JoeJulian Unless you finally hit a bug that you had not yet encountered.
05:34 rhqq :)
05:34 rhqq the usage has not changed, it's quite a static environment
05:34 rhqq any hints before the upgrade?
05:36 rhqq do i need to reboot or simple restarting service is fine?
05:37 masuberu would you recommend using gluster if my app needs to read millions of small files (36KB)?
05:37 rhqq not really
05:38 masuberu I would be using SSD nvme
05:38 masuberu RAID 10
05:38 rhqq little more background
05:39 masuberu ok
05:39 JoeJulian rhqq: upgrade server, stop service, pkill glusterfsd, start service, wait for self-heals to finish, move on to next server.
05:39 rhqq JoeJulian: thx
05:39 JoeJulian Once your servers are done, upgrade the clients and remount.
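A sketch of that per-server sequence as it might look on a CentOS 6 box (package and service names vary by distro; the volume name is illustrative):

    yum update glusterfs glusterfs-server glusterfs-fuse   # bring the server to 3.6.9
    service glusterd stop
    pkill glusterfsd                      # brick processes are not stopped by the init script
    service glusterd start
    gluster volume heal myvol info        # repeat until no entries remain, then move to the next server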
05:39 masuberu we work on a DNA sequencing lab
05:39 masuberu the sequencing machines create the digital information of the DNA samples
05:39 rhqq background in terms of what are the servers where and where are the clients
05:39 rhqq :P
05:40 masuberu ok
05:40 masuberu sorry
05:40 JoeJulian rhqq: what filesystem is on your bricks?
05:40 rhqq ext
05:40 rhqq 4
05:40 masuberu servers are
05:41 JoeJulian masuberu: I know other DNA sequencing labs that use gluster. That's about all I know about how they've implemented it though.
05:41 masuberu xeon processor E5-2600 v3
05:41 masuberu 512GB RAM DDR4
05:41 rhqq local networking, no remote locations,
05:41 rhqq right?
05:41 jith_ JoeJulian: yes i got.. thanks
05:41 masuberu 10x SSD 2TB nvme drives
05:41 masuberu yeah
05:42 rhqq point is gluster (unlike nfs) takes another round trip for ensuring the coherency
05:42 masuberu all connected through a mellanox 100Gig network
05:42 JoeJulian If you're not relying on replication to provide fault tolerance, you won't need the extra self-heal check.
05:42 rhqq so every network latency has a big impact
05:42 rhqq right
05:43 rhqq i keep forgetting about this feature of gluster :)
05:43 skoduri joined #gluster
05:43 JoeJulian rhqq: Yeah, ext was what I was guessing you were using. The upgrade should make a huge difference.
05:44 JoeJulian Ok, I'm off to bed. Got to leave for the airport in 4 hours so I should probably try to get a little sleep.
05:44 rhqq masuberu: so as JoeJulian reminded me, what will be the storage used for
05:44 rhqq are you setting up gluster for replication or stripping
05:44 glusterbot rhqq: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:45 Humble joined #gluster
05:45 JoeJulian Hush, glusterbot.
05:45 masuberu replication
05:45 JoeJulian (regex fail)
05:45 masuberu i don't think it would be useful to stripe such small files
05:45 JoeJulian btw.. only 1 p in stripe. :P
05:45 JoeJulian stripping is a whole different industry. :D
05:46 rhqq masuberu: personally i'd rather go with drbd + floating ip + nfs
05:46 * JoeJulian shudders
05:46 masuberu oh
05:46 masuberu ok
05:46 masuberu I have never heard of that
05:46 masuberu drbd
05:46 JoeJulian I wouldn't go with drbd unless it was throw-away data.
05:46 masuberu ok, i will have a look
05:46 rhqq why?
05:47 JoeJulian I've lost data to it more than once.
05:47 rhqq i've never lost data thanks to it
05:47 JoeJulian And I know a lot of other people that have as well.
05:47 gowtham joined #gluster
05:47 rhqq i've run that as a backend storage for a couple vm pools
05:47 JoeJulian That's actually how I found gluster all those years ago.
05:47 masuberu our data is quite critical we can't loose it
05:48 JoeJulian masuberu: yeah, loose dna can be bad. ;)
05:48 masuberu this cluster will basically process all those files on a pipeline which takes days
05:48 masuberu and it will create a file which we use for next pipeline
05:48 masuberu once we finish we can get rid of the small files
05:48 karnan joined #gluster
05:48 rhqq masuberu: no matter what you decide to use
05:48 rhqq i'd advice having a direct link between two hosts
05:49 rhqq so there's lowest latency possible
05:49 rhqq link dedicated for replication only
05:49 masuberu you mean a vlan for replication only?
05:49 rhqq i'd still stick to drbd for the storage layer, but eliminating latency and any /active/ device between hosts
05:49 rhqq o
05:49 rhqq no
05:50 rhqq just a cable connecting directly ethernet ports of the servers
05:50 JoeJulian masuberu: lookup(), open(), write(), close()... The lookup is 2 network round trips, the open, write, and close are each one. Multiply out your latency.
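Worked through with purely illustrative numbers: at a 0.2 ms round trip, a small-file create costs roughly

    5 round trips/file x 0.2 ms  =  1 ms/file of pure latency
    1,000,000 files x 1 ms/file  =  ~17 minutes spent waiting on the network, serially, before any payload moves

which is why per-hop latency matters far more than raw bandwidth for this kind of workload.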
05:50 rhqq separate network adapters
05:50 rhqq just for thart
05:50 rhqq that
05:50 masuberu ok
05:50 JoeJulian If you can keep files open, or if you are the software developers there are tricks you can use to avoid some of those calls.
05:51 masuberu ok
05:51 masuberu mmm im afraid im not in control of the development side
05:51 JoeJulian As for reducing latency, infiniband fabric using rdma.
05:52 JoeJulian skips context switches and has the lowest latency.
05:53 masuberu yeah... we use ethernet...
05:53 JoeJulian Ok, I've seriously got to go now. I'll probably be on tomorrow. I'm babysitting Wiwynn in the DC all day tomorrow.
05:53 masuberu ok i think we will need to use local storage
05:53 masuberu thanks
05:57 ninkotech__ joined #gluster
05:57 ninkotech_ joined #gluster
05:57 masuberu by the way, the same machines running my gluster cluster would be used for computing
05:57 masuberu don't know if that would be a benefit
05:58 rhqq ah
05:58 rhqq then gluster is fine
05:58 rhqq just make sure to have a dedicated direct link between hosts used for replication
05:59 satya4ever_ joined #gluster
06:00 masuberu what do you mean with replicated link?
06:00 masuberu I have 4 nics
06:00 masuberu on each server
06:00 masuberu 2 for bonding on a MLAG to the mellanox switches
06:00 rhqq in what configuration?
06:00 masuberu 2 x 25Gigs
06:00 rhqq ok
06:01 masuberu the other 2 are 10gigs
06:01 masuberu you mean using one of the 10gigs for replication?
06:01 rhqq ok, so you need to estimate the projected bandwidth use
06:01 masuberu yes
06:02 rhqq and then use 1 or more, to be just directly connected and bonded
06:02 rhqq you really, really need to worry about latency
06:02 masuberu yes I know
06:03 masuberu I am moving out from Panasas
06:03 masuberu panasas  = panfs
06:03 ashiq joined #gluster
06:03 pur_ joined #gluster
06:04 masuberu regarding estimating the projected bandwidth
06:04 aspandey joined #gluster
06:04 masuberu I am not sure how to do it
06:05 rhqq is your software going to write more than 1GB/s
06:05 masuberu do you have any website/book I can use as a reference?
06:05 rhqq (constantly)
06:05 masuberu mmm yes
06:06 masuberu well
06:06 rhqq that will be replicated.. using that link
06:06 masuberu it will copy all the files locally
06:06 masuberu and then it will start processing it
06:06 masuberu from local storage
06:07 rhqq not sure whether i understand
06:07 hgowtham joined #gluster
06:07 masuberu so the data would be living somewhere
06:08 masuberu and my computing nodes will copy the files from a sequencing job to local storage and then start running the pipeline from local storage
06:08 rhqq at what point the interaction with gluster storage happens?
06:09 masuberu the sequencing machines would be writing the files to gluster storage
06:09 masuberu and the compute nodes would get the files from gluster to local storage
06:10 masuberu seq machine -- WRITE --> gluster <-- READ/WRITE --> compute node
06:10 glusterbot masuberu: <'s karma is now -21
06:10 rhqq where are the seq machines,
06:11 rhqq what are those :P
06:11 masuberu they are Windows 7 machines
06:11 masuberu using 1G link
06:11 masuberu the sequencing process takes 3 days
06:11 rhqq ok, so
06:11 masuberu and creates 1.4Millions files
06:11 rhqq so basically
06:12 masuberu 70% of those files are smaller than 4KB
06:12 rhqq you want to sum up the writes
06:12 rhqq and see what order of magnitude you're going to reach
06:13 rhqq because that data will be replicated, hence all the writes are going to happen twice. first time - when whatever client is writing to one of the gluster nodes
06:13 masuberu yes
06:13 rhqq and secondly the same amount of data is going to be exchanged via replication link
06:14 rhqq this is why you want plain, uninterrupted link between host
06:14 rhqq gluster hosts
06:14 rhqq does it make sense now?
06:15 masuberu yes
06:16 masuberu so the replication needs to be isolated to a dedicated network
06:16 rhqq otherwise all the processes reading from gluster will be pretty much hung until the data they want to read are considered clean
06:16 rhqq not even a network, i'd rather say physical cable. you dont want even a switch inbetween
06:17 masuberu but I would have more than 2 machines
06:17 masuberu you mean like a ring network?
06:17 masuberu ring topology network?
06:17 rhqq are you going to have more than 2 glusterfs server nodes?
06:17 masuberu yes
06:17 rhqq ah,
06:17 rhqq that was missing too :P
06:18 masuberu well I would like to discuss that as well
06:18 masuberu my idea was to use all the nodes for gluster and for computing at the same time
06:19 masuberu not sure if that would make sense?
06:19 rhqq yeah
06:19 rhqq well
06:19 masuberu because I have a lot of data to store
06:19 masuberu and each node only has 10TB on RAID 10
06:20 masuberu and I need like 100TB at least
06:20 rhqq thats slowly beyond my competence tbh. i've only used 2-node deployments
06:21 rhqq that were used remotely
06:21 rhqq i've met all the latency issues
06:21 rhqq and that's what i can complain about ;)
06:21 rhqq personally i tend to separate storage from computing nodes of any kind, but your solution might require different measures
06:22 masuberu ok
06:22 masuberu hum
06:23 atalur joined #gluster
06:25 jtux joined #gluster
06:28 anil joined #gluster
06:34 Gnomethrower joined #gluster
06:40 Manikandan_ joined #gluster
06:45 rafi joined #gluster
06:46 glafouille joined #gluster
06:47 rafi joined #gluster
06:55 rhqq left #gluster
06:55 ramky joined #gluster
06:57 Anarka joined #gluster
06:57 Manikandan joined #gluster
06:57 ashiq joined #gluster
07:09 hackman joined #gluster
07:09 jri joined #gluster
07:11 gem joined #gluster
07:18 DV joined #gluster
07:24 micw joined #gluster
07:24 micw hi
07:24 glusterbot micw: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:24 micw do i need to restart glusterd after i created /var/lib/glusterd/secure-access?
07:24 [Enrico] joined #gluster
07:25 micw and do i need ca/key/cert for TLS on the Management Path?
07:27 kshlm micw yes to both
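A rough sketch of both steps, using the default file locations glusterfs looks for (certificate subject and lifetime are illustrative):

    # on every node (servers, and clients too if the I/O path will use TLS)
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=server1" -days 3650 -out /etc/ssl/glusterfs.pem
    cat all-nodes-*.pem > /etc/ssl/glusterfs.ca     # concatenation of every node's cert, distributed everywhere
    touch /var/lib/glusterd/secure-access           # enables TLS on the management path
    service glusterd restart                        # secure-access is only read at startup

The management path reads the same glusterfs.pem/.key/.ca files as the brick/client path; the latter is switched on separately with the client.ssl and server.ssl volume options.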
07:32 gem_ joined #gluster
07:48 micke joined #gluster
08:06 Anarka hey! :) i have 4 servers, and plan to have them divided between 2 datacenters (really close and using atm 10Gb/s). I understand 4 servers aren't the best for quorum, and considering there's ofc the chance that the fiber goes dark, leaving 2 hosts on each side. I would like it to be replicated or dist-replicated. Any tips on this config or should i just shoot for 3 or 5 servers?
08:08 the-me joined #gluster
08:11 muneerse joined #gluster
08:17 Manikandan joined #gluster
08:20 cloph_away glusterbot: source
08:20 cloph_away glusterbot: list
08:21 cloph_away *: is glusterbot's plugin source public somewhere? the naked ping reminder specifically :-)
08:21 glusterbot cloph_away: My source is at https://github.com/ProgVal/Limnoria
08:21 glusterbot cloph_away: Admin, Alias, Anonymous, Bugzilla, Channel, ChannelStats, Conditional, Config, Dict, Factoids, Google, Herald, Karma, Later, MessageParser, Misc, Network, NickCapture, Note, Owner, Plugin, PluginDownloader, Reply, Seen, Services, String, Topic, Trigger, URL, User, Utilities, and Web
08:22 cloph_away oh, network latency \o/
08:23 nixpanic joined #gluster
08:23 nixpanic joined #gluster
08:23 bitchecker joined #gluster
08:23 v12aml joined #gluster
08:23 ndk_ joined #gluster
08:24 zoldar joined #gluster
08:25 muneerse2 joined #gluster
08:29 muneerse joined #gluster
08:32 Slashman joined #gluster
08:34 micw joined #gluster
08:34 micw does the management path use the same tls CA as client-server tls?
08:34 lanning joined #gluster
08:36 bio___ joined #gluster
08:36 kotreshhr1 joined #gluster
08:36 d-fence__ joined #gluster
08:36 magrawal_ joined #gluster
08:37 robb_nl joined #gluster
08:37 monotek2 joined #gluster
08:40 natgeorg joined #gluster
08:40 dastar joined #gluster
08:40 rafi1 joined #gluster
08:40 zerick joined #gluster
08:41 atalur_ joined #gluster
08:42 the-me_ joined #gluster
08:43 Nebraskka joined #gluster
08:43 Iouns joined #gluster
08:44 harish_ joined #gluster
08:44 samppah joined #gluster
08:45 mdavidson joined #gluster
08:47 Pintomatic joined #gluster
08:47 lezo joined #gluster
08:59 itisravi joined #gluster
09:15 micw i have 3 machines with 4x4 TB disks each. i'd like to have 3x redundancy. how would i best set up the cluster?
09:16 micw is it a good idea not to have a raid and use the machines as redundancy instead?
09:21 akay micw: JoeJulian once told me to RAID0 the 4 disks, then set replica 3 on the gluster volume
09:21 akay If you don't RAID0 then you'll be limited to 4TB file size unless you use sharding
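For three machines that could look roughly like this (hostnames, brick paths and the shard size are illustrative; sharding needs the 3.7 series or newer):

    gluster volume create gv0 replica 3 \
        srv1:/bricks/raid0/brick srv2:/bricks/raid0/brick srv3:/bricks/raid0/brick
    gluster volume set gv0 features.shard on              # only if single files may outgrow a brick
    gluster volume set gv0 features.shard-block-size 64MB
    gluster volume start gv0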
09:25 sankarshan joined #gluster
09:28 micw how about Distributed Replicated Volume?
09:30 micw where can i find http://www.gluster.org/community/documentation/index.php/Gluster_3.2_Filesystem_Administration_Guide for 3.8?
09:37 cloph joined #gluster
09:42 micw can i have multiple bricks (and other stuff) on one partition?
09:42 micw will gluster re-balance according to the free space on the partition then?
09:47 micw what transport type should i use?
09:49 karthik___ joined #gluster
09:55 deniszh joined #gluster
10:00 micw Type: Distributed-Replicate
10:00 micw Status: Started
10:00 micw Number of Bricks: 4 x 3 = 12
10:00 micw seems to work
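A 4 x 3 layout like that is usually created by listing the bricks so that each group of three consecutive bricks sits on three different servers, since replica sets are formed in the order given (names and paths illustrative):

    gluster volume create gv0 replica 3 \
        srv1:/data/d1/brick srv2:/data/d1/brick srv3:/data/d1/brick \
        srv1:/data/d2/brick srv2:/data/d2/brick srv3:/data/d2/brick \
        srv1:/data/d3/brick srv2:/data/d3/brick srv3:/data/d3/brick \
        srv1:/data/d4/brick srv2:/data/d4/brick srv3:/data/d4/brick
    gluster volume info gv0    # should report "Number of Bricks: 4 x 3 = 12"

With that ordering any single server can fail and every file still has two live copies.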
10:10 ivan_rossi akay: why? you are limited to the size of the brick IIRC, underlying raid should not matter for file sizes, just for performance
10:16 Manikandan joined #gluster
10:18 paul98 joined #gluster
10:19 arcolife joined #gluster
10:24 cloph Hi - trying my luck again: got a problem with geo-replication, gluster volume geo-replication fileshare slavehost::fileshare status shows the replication as status=Active (in History Crawl), but seems to be stuck that way for long time already (no noteworthy network activity either, another volume with same master->slave replication works OK between the two hosts). However trying to call pause (or config for that matter) on the volume only
10:24 cloph results in error, with peers that are not part of the geo-replication:
10:24 cloph # gluster volume geo-replication fileshare slavehost::fileshare pause
10:24 cloph Staging failed on peer-with-no-georep. Error: Geo-replication session between fileshare and slavehost::fileshare does not exist.
10:24 cloph Staging failed on another-wo-geo-rep. Error: Geo-replication session between fileshare and slavehost::fileshare does not exist.
10:24 cloph geo-replication command failed
10:24 cloph Any idea on what is wrong?
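A few things that might narrow it down, reusing the names from the session above (the log path is the usual default and may differ per install):

    gluster volume geo-replication fileshare slavehost::fileshare status detail
    less /var/log/glusterfs/geo-replication/fileshare/*.log    # the worker log usually says why the crawl is stuck
    gluster volume geo-replication fileshare slavehost::fileshare stop force    # last resort: bounce the session
    gluster volume geo-replication fileshare slavehost::fileshare start force

The staging errors also hint that some peers have no record of the session under /var/lib/glusterd/geo-replication/, which may be worth comparing across the pool.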
10:27 armyriad joined #gluster
10:27 TvL2386 joined #gluster
10:28 micw ivan_rossi, no, i'm not limited to the brick size
10:28 micw a brick is ~3.5TB
10:28 micw i have 12 bricks
10:28 micw 3x redundancy
10:28 micw ~14 TB usable
10:29 micw advantage: if a disk fails, i only lose the redundancy of this disk (not the whole 3 disks)
10:35 ira joined #gluster
10:35 micw i wrote a testfile (1 gb) to my volume - but df shows nothing
10:36 micw should df work with gluster?
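df does work against a gluster mount, but only on the client mount point, and 1 GB on a roughly 14 TB volume rounds to almost nothing in percentage terms; with replica 3 the raw usage shows up on each brick instead. A quick check (paths illustrative):

    df -h /mnt/gluster              # run against the FUSE mount, not the brick directories
    du -sh /mnt/gluster/testfile
    df -h /data/d1                  # each brick holding a replica should show ~1 GB used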
10:37 aamerik joined #gluster
10:41 Ru57y joined #gluster
10:44 ivan_rossi micw: i am pretty confident that max file size is limited to brick size unless you shard. my point is that raid0 should not have anything to do with it. e.g. you could do raid10 and it will be ok
10:44 micw i see
10:47 atinm joined #gluster
10:50 ivan_rossi actually RH recommends raid10 or raid6 on the servers; w. raid-0, when a disk fails you will lose a brick, which can be ok or not according to your requirements...
10:51 micw it's ok, i have 3x redundancy over 3 servers
10:52 arcolife joined #gluster
10:59 karthik___ joined #gluster
11:04 ivan_rossi yes but with raid0 a disk fault will induce a volume repair once you replace it. i personally would go w.raid 10 or raid5 with your setup
11:04 ivan_rossi and will keep sleeping when a disk fails at night
11:04 micw i do since 3x redundancy
11:04 micw without raid
11:05 micw i still can sleep when 2 servers fail
11:05 ivan_rossi suit yourself. suits me too
11:05 micw ;)
11:19 ira joined #gluster
11:28 atinm joined #gluster
11:32 kenansulayman joined #gluster
11:36 Anarka joined #gluster
11:37 kovshenin joined #gluster
11:38 kovshenin joined #gluster
11:45 kovshenin joined #gluster
11:55 rwheeler joined #gluster
12:10 jtux joined #gluster
12:17 kenansulayman joined #gluster
12:19 Guest14030 joined #gluster
12:23 jtux joined #gluster
12:25 arcolife joined #gluster
12:32 glush joined #gluster
12:33 glush hello! is it solvable on glusterfs? bind(6, {sa_family=AF_LOCAL, sun_path="/var/lib/samba/msg/50"}, 110) = -1 EACCES (Permission denied)
12:48 nishanth joined #gluster
12:49 arif-ali joined #gluster
12:50 glush joined #gluster
12:52 chirino joined #gluster
12:54 glush_ joined #gluster
12:58 B21956 joined #gluster
13:01 DV joined #gluster
13:01 glush it looks like not a gluster problem
13:04 ivan_rossi left #gluster
13:09 shubhendu joined #gluster
13:10 glush_ left #gluster
13:10 glush left #gluster
13:12 ben453 joined #gluster
13:17 Che-Anarch joined #gluster
13:25 shyam joined #gluster
13:27 shyam left #gluster
13:29 haomaiwang joined #gluster
13:31 ramky joined #gluster
13:32 fsimonce joined #gluster
13:32 shyam joined #gluster
13:44 nbalacha joined #gluster
13:51 arif-ali joined #gluster
13:59 harish_ joined #gluster
13:59 karnan joined #gluster
14:01 nishanth joined #gluster
14:02 shubhendu joined #gluster
14:25 ben453 joined #gluster
14:26 Gnomethrower joined #gluster
14:32 nehar joined #gluster
14:33 guhcampos joined #gluster
14:34 Humble joined #gluster
14:35 social joined #gluster
14:38 squizzi joined #gluster
14:40 social joined #gluster
14:43 pur_ joined #gluster
14:57 DV joined #gluster
15:16 DV joined #gluster
15:17 kpease joined #gluster
15:18 kramdoss_ joined #gluster
15:19 harish_ joined #gluster
15:21 hackman joined #gluster
15:31 siel joined #gluster
15:31 siel joined #gluster
15:31 arcolife joined #gluster
15:54 ic0n_ joined #gluster
15:55 kotreshhr joined #gluster
15:55 pousley joined #gluster
15:58 ic0n joined #gluster
16:11 poornimag joined #gluster
16:15 tertiary joined #gluster
16:18 tertiary i already checked the man page so i assume this is a no, but just to make sure: there is no way to run `gluster volume rebalance $vol start` so that it stays running until it completes?
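Correct, the CLI returns immediately; a small wrapper that polls the status until it stops reporting "in progress" is a common workaround (the status wording may vary slightly between releases):

    gluster volume rebalance "$vol" start
    while gluster volume rebalance "$vol" status | grep -q 'in progress'; do
        sleep 60
    done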
16:26 Gambit15 joined #gluster
16:39 harish joined #gluster
16:43 Philambdo joined #gluster
16:44 nehar joined #gluster
16:47 Apeksha joined #gluster
16:56 shaunm joined #gluster
17:00 nehar joined #gluster
17:01 jiffin joined #gluster
17:07 DV joined #gluster
17:11 ben453 joined #gluster
17:12 amye joined #gluster
17:21 skylar joined #gluster
17:24 jiffin joined #gluster
17:34 plarsen joined #gluster
17:50 squizzi_ joined #gluster
18:00 Slashman joined #gluster
18:07 amye joined #gluster
18:20 Sue joined #gluster
18:23 Sue am i overthinking this? three machines, two drives each, how do i tolerate one machine dying while not using replica and the total space available being the size of a single drive?
18:29 Sue or would disperse 4 redundancy 2 survive a machine dying
18:43 cliluw Sue: It sounds like what you're asking for is mathematically impossible. Since you're limited to a single drive, you can only have 1 copy of the data.
18:47 Sue i worded that weird
18:48 Sue i mean, in replicated, the space available would be the total of one drive, as in raid 1
18:53 cliluw Sue: So you have 3 times as much space as you have data, right?
18:56 Sue yes
18:57 Sue well, i don't have a defined amount of data
18:57 Sue i'm going for ovirt+gluster
18:58 Sue but i want to distribute across each node while being able to tolerate one node going down (two drives going away)
19:00 cliluw cliluw: Since you don't want to use a replicated volume, it sounds like you want "disperse 1 redundancy 2" volume.
19:02 Sue jbod, disperse 6 redundancy 2?
19:05 Sue all the documentation assumes i have six servers, can two bricks reside on each of the three?
19:06 Sue "GlusterFS will fail to create a dispersed volume if more than one brick of a disperse set is present on the same peer." seems not
19:11 Sue bad formatting on readthedocs, says i can force it. but how is that not optimal?
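For three servers with two bricks each, a disperse 6 redundancy 2 volume does survive one whole machine (exactly two bricks) going away, it just has no margin beyond that, which is why the CLI only creates it with force when two bricks of a set share a peer. A sketch (names and paths illustrative), giving roughly 2/3 of raw capacity as usable space:

    gluster volume create dvol disperse 6 redundancy 2 \
        srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1 \
        srv1:/bricks/b2 srv2:/bricks/b2 srv3:/bricks/b2 force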
19:16 Philambdo joined #gluster
19:21 Acinonyx joined #gluster
19:30 rhqq joined #gluster
19:30 rhqq JoeJulian: just imagine it got worse :)
19:43 rhqq guys, i've no traffic ( less than 500kb/s), i've no iops on brick volume, glusterfs is 3.6.9, and cpu usage is 100% - caused by the glusterfsd processes
19:43 rhqq i'm out of ideas how to troubleshoot that
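A few low-level ways to see what the brick processes are actually doing (volume name illustrative; the statedump lands under /var/run/gluster by default):

    top -H -p $(pgrep -d, glusterfsd)      # which brick threads are burning CPU
    gluster volume statedump myvol         # dumps call frames and inode tables per brick
    gluster volume profile myvol start && sleep 60 && gluster volume profile myvol info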
19:49 robb_nl joined #gluster
20:06 rhqq left #gluster
20:11 DV joined #gluster
20:41 ashiq joined #gluster
20:43 deniszh joined #gluster
20:58 arif-ali joined #gluster
21:12 arif-ali joined #gluster
21:25 pousley1 joined #gluster
21:33 harish joined #gluster
21:34 shaunm joined #gluster
21:41 hackman joined #gluster
22:07 DV joined #gluster
22:47 harish joined #gluster
22:55 hackman joined #gluster
23:14 hackman joined #gluster
23:40 hackman joined #gluster
23:49 abyss^ joined #gluster
