
IRC log for #gluster, 2016-04-05


All times shown according to UTC.

Time Nick Message
00:06 misc joined #gluster
00:07 amye joined #gluster
00:07 haomaiwa_ joined #gluster
00:12 jbrooks joined #gluster
00:13 misc joined #gluster
00:22 misc joined #gluster
00:26 DV joined #gluster
00:28 misc joined #gluster
00:35 jvandewege_ joined #gluster
00:36 ariowner joined #gluster
00:36 ariowner Hi all, can I configure the network between server and client separately from the actual data network and ping check?
00:45 gbox joined #gluster
00:55 misc joined #gluster
01:01 haomaiwa_ joined #gluster
01:02 misc joined #gluster
01:10 auzty joined #gluster
01:23 EinstCrazy joined #gluster
01:27 misc joined #gluster
01:33 misc joined #gluster
01:34 eagles0513875__ joined #gluster
01:34 eagles0513875__ hey all :)
01:34 eagles0513875__ has anyone used gluster on a linode vps?
01:59 harish joined #gluster
02:01 haomaiwa_ joined #gluster
02:01 misc joined #gluster
02:17 EinstCra_ joined #gluster
02:17 atalur joined #gluster
02:25 eagles0513875_ joined #gluster
02:26 timotheus1 joined #gluster
02:38 haomaiwa_ joined #gluster
02:40 PaulCuzner joined #gluster
02:45 baojg joined #gluster
02:46 cliluw joined #gluster
02:57 haomaiw__ joined #gluster
03:01 haomaiwang joined #gluster
03:01 amye joined #gluster
03:05 juhaj joined #gluster
03:13 MugginsM joined #gluster
03:22 Wizek_ joined #gluster
03:23 Wizek joined #gluster
03:26 overclk joined #gluster
03:35 sakshi joined #gluster
03:39 shubhendu joined #gluster
03:43 geniusoftime joined #gluster
03:44 geniusoftime hello. I have a working cluster, but I am unable to add an nth node. gluster peer probe handshake doesn't appear to complete correctly. I have this error: Unable to find peer by uuid: 00000000-0000-0000-0000-000000000000
03:44 geniusoftime network configuration is fine, and I've flushed iptables on both sides
03:48 jbrooks joined #gluster
03:51 atinm joined #gluster
03:53 dka joined #gluster
03:53 dka I have a quick question
03:53 dka about gluster
03:54 dka I have 3 hosts with 2x2 TB + 2x20 GB hard drives. I want to use it with docker-glusterfs-driver, should I just create a 2 TB drive for /data?
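For reference, a minimal sketch of the kind of volume dka is describing, using the 2 TB disks as bricks; the hostnames, the brick path /data/brick1 and the volume name below are placeholders, not taken from the log:

    # assumes the three hosts are already peer-probed and /data/brick1
    # exists on each of them (all names are illustrative)
    gluster volume create docker-vols replica 3 \
        host1:/data/brick1/docker-vols \
        host2:/data/brick1/docker-vols \
        host3:/data/brick1/docker-vols
    gluster volume start docker-vols
    gluster volume info docker-vols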
03:55 Wizek joined #gluster
04:01 farblue joined #gluster
04:01 itisravi joined #gluster
04:01 haomaiwang joined #gluster
04:02 nbalacha joined #gluster
04:05 dka I would love to find some support to get gluster working
04:14 d0nn1e joined #gluster
04:15 MugginsM joined #gluster
04:21 eagles0513875_ dka: probably a lot of people are asleep in here
04:25 gem joined #gluster
04:25 theron joined #gluster
04:27 dka eagles0513875_, I've been looking around and posting for the last few days
04:27 dka they are obviously in hibernation
04:27 eagles0513875_ dka: for now, central Europe; it's just about 6:30am here
04:27 eagles0513875_ so they should be up soon
04:28 poornimag joined #gluster
04:28 ggarg geniusoftime, could you send the glusterd logs and mail this problem to gluster-users@gluster.org or gluster-devel@gluster.org? We will look into it. Also, please give information about which gluster version you are running.
04:29 ggarg geniusoftime, we need logs from the nth node and from the node on which you are executing the gluster peer probe command
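A hedged sketch of the information ggarg is asking for; the log path is the usual default and may differ per distribution:

    # run on both the probing node and the new (nth) node
    gluster --version
    gluster peer status
    tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    ls /var/lib/glusterd/peers/    # one file per known peer, named by its UUID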
04:29 eagles0513875_ ggarg: something to pick your brain on: would gluster be a good fit for a storage solution, something equivalent to owncloud?
04:30 geniusoftime hi ggarg. thanks for your reply. i will try to send those logs
04:30 eagles0513875_ also, does gluster require one to format their disks, or is it something that you install and it sets up the necessary things to operate?
04:30 ggarg eagles0513875_, yeah, obviously it's a good fit.
04:30 eagles0513875_ ggarg: my idea is to stick with my virtual server provider
04:30 ggarg geniusoftime, sorry for the late reply; it was night in India, so most people were sleeping
04:30 glusterbot ggarg: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
04:31 eagles0513875_ just trying to figure out what I need to do to get owncloud to speak to gluster or come up with my own solution.
04:31 eagles0513875_ ggarg: how well does gluster work in a vps environment?
04:34 nishanth joined #gluster
04:38 ggarg eagles0513875_, I need to check further what this owncloud is doing, but gluster is a good fit for public as well as private cloud.
04:38 eagles0513875_ ggarg: think something like dropbox etc
04:38 eagles0513875_ ggarg: http://owncloud.org
04:38 glusterbot Title: ownCloud.org (at owncloud.org)
04:38 eagles0513875_ they have two variations: an enterprise version with a few more features than the community edition
04:44 ppai joined #gluster
04:45 jiffin joined #gluster
04:48 dka I am reading this thread : http://innerdot.com/containers/developing-a-stateful-application-on-mesos-and-docker , he created this github repository : https://github.com/jmspring/mesos-stateful-example, I am trying to deploy the database, in my log I have :   creating volume/mounting volume $volume, mounting volume $containerid on $destvolume/unmounting volume $volume/ unmounting volume $containerid from $destvolume
04:48 glusterbot Title: Developing A Stateful Application On Mesos And Docker - Jim Spring (at innerdot.com)
04:48 dka Why is my volume unmounting so fast?
04:49 MugginsM the docker volume or the gluster volume?
04:50 karan_ joined #gluster
04:55 dka MugginsM, ?
04:55 dka 2016/04/05 06:45:07 Creating volume ff55e7e59cfc5d0e5f6aba826e379b565fffc5549a14f753d50fc55454f41b5f
04:55 dka 2016/04/05 06:45:07 Mounting volume kopaxgroup-openvpn-data on /var/lib/docker-volumes/_glusterfs/kopaxgroup-openvpn-data
04:55 dka 2016/04/05 06:45:07 Mounting volume ff55e7e59cfc5d0e5f6aba826e379b565fffc5549a14f753d50fc55454f41b5f on /var/lib/docker-volumes/_glusterfs/ff55e7e59cfc5d0e5f6aba826e379b565fffc5549a14f753d50fc55454f41b5f
04:55 dka 2016/04/05 06:45:07
04:55 dka 2016/04/05 06:45:08 Unmounting volume kopaxgroup-openvpn-data from /var/lib/docker-volumes/_glusterfs/kopaxgroup-openvpn-data
04:55 dka 2016/04/05 06:45:08 Unmounting volume ff55e7e59cfc5d0e5f6aba826e379b565fffc5549a14f753d50fc55454f41b5f from /var/lib/docker-volumes/_glusterfs/ff55e7e59cfc5d0e5f6aba826e379b565fffc5549a14f753d50fc55454f41b5f
04:55 dka looks like this
04:55 dka and the application restart over & over
04:56 dka don't pay attention to the name, it's just the first volume i wanted to create, i am deploying the database from the github gluster tutorial link i gave you
04:56 MugginsM looks like the app in the container is exiting for some  reason?
04:56 MugginsM I'm not even remotely familiar with it
04:56 MugginsM so not sure I'm that useful :)
04:57 dka Yeap, the app is restarting for some reason. I was able to try gluster, I was happy
04:57 dka until i tried this drivers
04:57 dka I don't even know if it works for anyone, no docs online
04:57 dka that's why I hope I could talk to a gluster dev in here
04:58 dka or should I just give up on gluster? Have you tried flocker? do you know any other stateful app plugin for mesos ?
05:00 aspandey joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 MugginsM whatever you use will need some kind of storage and gluster's pretty good. Not really familiar with things beyond that
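One way to narrow this down is to mount the gluster volume by hand, outside the docker volume driver, and check that it stays mounted and writable; if it does, the unmounts are coming from the driver, not from gluster. The server name is a placeholder and the assumption is that the docker volume name maps to a gluster volume of the same name:

    mkdir -p /mnt/glustertest
    mount -t glusterfs gluster-server1:/kopaxgroup-openvpn-data /mnt/glustertest
    touch /mnt/glustertest/probe && ls -l /mnt/glustertest
    umount /mnt/glustertest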
05:07 ndarshan joined #gluster
05:09 skoduri joined #gluster
05:09 dka panteras_1 | mesos-slave stderr | I0405 05:07:05.544971   189 slave.cpp:1294] Got assigned task postgresnode.3a23f901-faec-11e5-9a5f-02428c5faf42 for framework 240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000
05:09 dka panteras_1 | mesos-slave stderr | I0405 05:07:05.546255   189 gc.cpp:84] Unscheduling '/tmp/mesos/slaves/00bcf092-1699-4335-aad5-00da922fee7d-S0/frameworks/240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000' from gc
05:09 dka panteras_1 | mesos-slave stderr | I0405 05:07:05.546380   189 gc.cpp:84] Unscheduling '/tmp/mesos/meta/slaves/00bcf092-1699-4335-aad5-00da922fee7d-S0/frameworks/240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000' from gc
05:09 dka panteras_1 | I0405 05:07:05.546566   187 slave.cpp:1410] Launching task postgresnode.3a23f901-faec-11e5-9a5f-02428c5faf42 for framework 240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000
05:09 dka panteras_1 | mesos-slave stderr | I0405 05:07:05.547145   187 paths.cpp:436] Trying to chown '/tmp/mesos/slaves/00bcf092-1699-4335-aad5-00da922fee7d-S0/frameworks/240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000/executors/postgresnode.3a23f901-faec-11e5-9a5f-02428c5faf42/runs/6f83cdc9-b5a0-45a6-9fc8-7b032cc59a14' to user 'root'
05:09 karthik___ joined #gluster
05:09 dka panteras_1 | mesos-slave stderr | I0405 05:07:05.553169   187 slave.cpp:4999] Launching executor postgresnode.3a23f901-faec-11e5-9a5f-02428c5faf42 of framework 240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000 with resources cpus(*):0.1; mem(*):32 in work directory '/tmp/mesos/slaves/00bcf092-1699-4335-aad5-00da922fee7d-S0/frameworks/240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000/executors/postgresnode.3a23f901-faec-11e5-9a5f-02428c5faf42/runs/6f83cdc9-b5a0-45a6-9fc8-7b032cc59a14'
05:09 dka panteras_1 | mesos-slave stderr | I0405 05:07:05.554107   187 slave.cpp:1628] Queuing task 'postgresnode.3a23f901-faec-11e5-9a5f-02428c5faf42' for executor 'postgresnode.3a23f901-faec-11e5-9a5f-02428c5faf42' of framework 240e72eb-3fda-4d00-b367-1f2d22ce9f3e-0000
05:09 dka panteras_1 | mesos-slave stderr | I0405 05:07:05.559828   185 docker.cpp:762] Starting container '6f83cdc9-b5a0-45a6-9fc8-7b032cc59a14' for task 'postgresnode.3a23f901-faec-11e5-9a5f-02428c5
05:09 dka I feel like the problem comes from gluster regarding this logs
05:09 scobanx_ joined #gluster
05:11 scobanx_ Hi, How can I increase the disperse heal speed? Currently it only uses 10MB/s per node.
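A hedged starting point for scobanx_'s question; the tunable names assume a 3.7-era release, so verify them with `gluster volume set help` before applying anything:

    gluster volume heal MYVOL info                          # what is still pending
    gluster volume set MYVOL disperse.background-heals 16   # concurrent background heals (EC)
    gluster volume set MYVOL disperse.heal-wait-qlength 256
    gluster volume heal MYVOL full                          # kick off a full sweep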
05:12 jiffin1 joined #gluster
05:12 Manikandan joined #gluster
05:14 itisravi joined #gluster
05:15 jiffin joined #gluster
05:16 spalai joined #gluster
05:20 aravindavk joined #gluster
05:22 hchiramm joined #gluster
05:22 hchiramm_ joined #gluster
05:27 Bhaskarakiran joined #gluster
05:31 jiffin1 joined #gluster
05:32 prasanth joined #gluster
05:37 hgowtham joined #gluster
05:37 kdhananjay joined #gluster
05:38 EinstCra_ joined #gluster
05:40 gowtham joined #gluster
05:41 skoduri joined #gluster
05:45 jiffin1 joined #gluster
05:53 Saravanakmr joined #gluster
05:57 jiffin1 joined #gluster
05:58 rafi joined #gluster
05:59 skoduri joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 RameshN joined #gluster
06:01 jiffin1 joined #gluster
06:02 ashiq joined #gluster
06:07 anil_ joined #gluster
06:08 ggarg joined #gluster
06:08 jiffin1 joined #gluster
06:10 mhulsman joined #gluster
06:11 jiffin joined #gluster
06:15 jiffin joined #gluster
06:18 jiffin1 joined #gluster
06:18 anil_ joined #gluster
06:19 atinm joined #gluster
06:23 jiffin1 joined #gluster
06:23 rafi joined #gluster
06:27 rastar joined #gluster
06:28 jtux joined #gluster
06:31 jiffin1 joined #gluster
06:34 karnan joined #gluster
06:35 jiffin1 joined #gluster
06:38 jiffin joined #gluster
06:40 anil_ joined #gluster
06:41 kotreshhr joined #gluster
06:42 ashiq dka, hi
06:43 ashiq dka, I am not sure what is happening with the gluster volume-driver for docker; so the problem is that the gluster volume is getting unmounted?
06:47 kshlm joined #gluster
06:51 [Enrico] joined #gluster
06:53 rafi joined #gluster
06:55 kshlm joined #gluster
06:55 dka ashiq, yes
06:55 DV joined #gluster
07:01 haomaiwa_ joined #gluster
07:06 atalur_ joined #gluster
07:07 ggarg joined #gluster
07:14 Lee1092 joined #gluster
07:15 jiffin joined #gluster
07:18 jri joined #gluster
07:22 rafi joined #gluster
07:23 anil_ joined #gluster
07:24 atinm joined #gluster
07:25 atinm joined #gluster
07:26 wnlx joined #gluster
07:32 fsimonce joined #gluster
07:34 ivan_rossi joined #gluster
07:44 DV joined #gluster
07:46 ctria joined #gluster
07:51 farblue_ joined #gluster
07:52 ryllise joined #gluster
07:53 ahino joined #gluster
07:54 anil joined #gluster
08:00 [diablo] joined #gluster
08:01 haomaiwa_ joined #gluster
08:09 shyam joined #gluster
08:19 harish joined #gluster
08:22 jiffin1 joined #gluster
08:27 rafi1 joined #gluster
08:27 maxadamo joined #gluster
08:28 maxadamo Hi!
08:28 glusterbot maxadamo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:29 maxadamo any update regarding the issue with debian wheezy repository that vanished about 1 week ago?
08:29 maxadamo namely, this became suddenly empty: http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/
08:29 glusterbot Title: Index of /pub/gluster/glusterfs/3.5/LATEST (at download.gluster.org)
08:29 jiffin1 joined #gluster
08:30 om2 joined #gluster
08:31 maxadamo sorry. to be more precise, this is missing all the files: http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/
08:31 glusterbot Title: Index of /pub/gluster/glusterfs/3.5/LATEST/Debian (at download.gluster.org)
08:32 kenansulayman joined #gluster
08:33 Guest53161 joined #gluster
08:33 jiffin1 joined #gluster
08:34 anil joined #gluster
08:36 Slashman joined #gluster
08:38 jiffin joined #gluster
08:39 xiu /b 12
08:39 jlockwood joined #gluster
08:40 Guest53161 left #gluster
08:41 mhulsman1 joined #gluster
08:56 spalai1 joined #gluster
08:56 jiffin joined #gluster
08:59 jiffin1 joined #gluster
08:59 Rasathus joined #gluster
09:00 arcolife joined #gluster
09:00 johnny_b_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:03 karthik___ joined #gluster
09:04 johnny_b_ hey guys, I'm very new to gluster; actually, just yesterday I created my first gluster setup. I have a 3 node mirror, everything was working fine, but today I restarted two nodes and one of them is disconnected now. There are gluster00, gluster01 and gluster02. gluster00 and gluster01 show that peer gluster02 is disconnected, but gluster02 shows that all peers are connected. How is that possible? Thanks in advance
09:04 jiffin1 joined #gluster
09:12 jiffin joined #gluster
09:13 pur joined #gluster
09:16 jiffin1 joined #gluster
09:16 shubhendu joined #gluster
09:19 jiffin joined #gluster
09:21 jiffin1 joined #gluster
09:23 EinstCrazy joined #gluster
09:23 vmallika joined #gluster
09:24 jiffin joined #gluster
09:28 jiffin1 joined #gluster
09:34 atalur_ joined #gluster
09:35 rafi joined #gluster
09:39 beeradb_ joined #gluster
09:43 baojg joined #gluster
09:47 jlockwood joined #gluster
09:48 Ulrar I have a node that I have to replace. I used replace-brick to move all the data to the new node, then I powered off the old node, and tried to assign the old IP to the new server
09:48 Ulrar glusterfs won't start anymore when I do that
09:49 Ulrar Guess I should remove the old node from gluster first ?
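For the record, the usual sequence once replace-brick has committed is to drop the old peer before reusing its address; the hostname below is a placeholder:

    gluster volume info | grep old-node   # confirm no volume still references it
    gluster peer detach old-node          # append 'force' only if the node is already gone
    gluster peer status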
09:51 Debloper joined #gluster
09:52 baojg joined #gluster
10:01 haomaiwa_ joined #gluster
10:02 Norky joined #gluster
10:04 beeradb_ joined #gluster
10:04 dvargek joined #gluster
10:05 dvargek Hello
10:05 glusterbot dvargek: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:06 dvargek Is there anyone having some experience in using gluster georeplication?
10:07 dvargek I need a way to reset an existing georeplication index. The manual just says 'You can enforce a full sync of the data by erasing the index and restarting GlusterFS Geo-replication.', but there is no description of how erasing the index is done.
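A hedged sketch of the procedure the older docs imply; the master volume and slave names are placeholders, and newer releases may refuse the indexing toggle while a session is defined, so treat this as a starting point rather than an authoritative answer:

    gluster volume geo-replication MASTERVOL slavehost::slavevol stop
    gluster volume set MASTERVOL geo-replication.indexing off     # drops the index
    gluster volume geo-replication MASTERVOL slavehost::slavevol start   # forces a full resync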
10:09 kkplgkantor joined #gluster
10:14 EinstCrazy joined #gluster
10:15 farblue hi all :) Can someone recommend the best way to achieve the equiv. of a ‘RAID6’ setup where I can survive the loss of any 2 bricks in a volume?
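The closest gluster analogue to RAID6 is a dispersed (erasure coded) volume with redundancy 2, which tolerates the loss of any 2 bricks; a minimal sketch with placeholder hosts and paths:

    gluster volume create ec-vol disperse 6 redundancy 2 \
        host1:/bricks/ec host2:/bricks/ec host3:/bricks/ec \
        host4:/bricks/ec host5:/bricks/ec host6:/bricks/ec
    gluster volume start ec-vol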
10:15 robb_nl joined #gluster
10:19 vmallika joined #gluster
10:19 _nixpanic joined #gluster
10:19 _nixpanic joined #gluster
10:23 _nixpanic joined #gluster
10:24 _nixpanic joined #gluster
10:26 kkplgkantor left #gluster
10:34 rafi1 joined #gluster
10:35 Ulrar "State: Peer in Cluster (Disconnected)" --> Does that mean it's connected, or not ? The remote peer sees all peers as connected
10:41 _nixpanic joined #gluster
10:41 _nixpanic joined #gluster
10:41 hackman joined #gluster
10:41 farblue I think it means it is disconnected
10:41 farblue can you probe it from another peer?
10:42 Ulrar I did, it says success it's already a peer
10:43 Ulrar It's weird, the node says connected to all other peers, and the other peers are saying this weird disconnected but in cluster thing
10:43 Ulrar Guess it doesn't like to change IP
10:44 Ulrar Ha, rebooted the node and gluster won't start anymore .. great
10:46 jad_jay_away_ joined #gluster
10:47 atalur_ joined #gluster
10:49 farblue I had some issues with gluster failing to start the other day and found it helped to double-check and correct the extended attributes on the brick subfolder for my volume and also start glusterd in debug mode - which appeared to let it get past an issue in the local journal
10:49 karnan joined #gluster
10:49 farblue but whether that is your issue I don’t know
10:54 farblue if I have a replica-3 volume what happens if 2 of the nodes die? not split-brain, just offline?
10:57 vmallika joined #gluster
11:01 haomaiwa_ joined #gluster
11:05 overclk joined #gluster
11:06 anil joined #gluster
11:09 semajnz left #gluster
11:10 hgowtham REMINDER: Gluster Community Bug Triage meeting to take place in ~50 minutes
11:11 karnan joined #gluster
11:13 mhulsman joined #gluster
11:14 gem joined #gluster
11:19 jiffin joined #gluster
11:19 ira joined #gluster
11:26 klaxa joined #gluster
11:30 jiffin joined #gluster
11:36 jiffin1 joined #gluster
11:37 Hesulan joined #gluster
11:46 dvargek left #gluster
11:49 jiffin1 joined #gluster
11:56 R0ok_ joined #gluster
11:56 hgowtham REMINDER: Gluster Community Bug Triage meeting to take place in ~5 minutes
11:57 deniszh joined #gluster
12:00 jiffin joined #gluster
12:01 jlockwood joined #gluster
12:01 haomaiwa_ joined #gluster
12:06 Pupeno joined #gluster
12:06 Pupeno joined #gluster
12:12 gem joined #gluster
12:16 jiffin joined #gluster
12:19 rastar joined #gluster
12:20 EinstCrazy joined #gluster
12:34 goretoxo joined #gluster
12:40 shaunm joined #gluster
12:41 goretoxo left #gluster
12:42 goretoxo joined #gluster
12:51 luizcpg joined #gluster
12:52 julim joined #gluster
12:54 post-factum farblue: are you here?
12:54 vmallika joined #gluster
12:54 farblue hi :)
12:54 farblue only for a few mins before a meeting
12:54 johnmilton joined #gluster
12:54 post-factum hello. have you got some patches from Manikandan to address your marker issues?
12:56 farblue no, as we discussed yesterday, the marker issue was just log pollution rather than an actual error and I’m not building from source - I’m using the Ubuntu PPA so the patches wouldn’t be of use to me
12:56 post-factum ah, okay
12:57 farblue but I decided there was not enough info out there about dispersed volumes so for now I’ve switched to a replica-3 setup
12:57 farblue although it means realistically only 1 node can fail rather than 2 before the volume is unavailable
12:59 post-factum farblue: i believe you can configure quorum to be less than 33%, but that looks and sounds weird
13:00 farblue I guess the most resilient is to have a replica-5
13:00 post-factum farblue: and ×5 amount of traffic
13:00 farblue quite
13:01 farblue a replica-3 with arbiter-2 would prob. be a nice thing to have
13:01 farblue I’m just trying to get to the equiv. of RAID6
13:01 haomaiwa_ joined #gluster
13:02 farblue as my cluster is only small the gluster servers are also general servers and I want to be able to take one out for upgrades and still have headroom for 1 failing while I’m doing that
13:02 farblue back in 30mins :)
13:05 post-factum kk
13:09 Guest89761 joined #gluster
13:09 arcolife joined #gluster
13:16 frakt_ joined #gluster
13:16 nixpanic_ joined #gluster
13:16 nixpanic_ joined #gluster
13:17 jotun_ joined #gluster
13:17 yawkat` joined #gluster
13:20 crashmag_ joined #gluster
13:20 ahino joined #gluster
13:20 worzieznc_ joined #gluster
13:20 sankarsh` joined #gluster
13:21 Pupeno joined #gluster
13:23 EinstCrazy joined #gluster
13:24 syadnom_ joined #gluster
13:24 hackman joined #gluster
13:24 kdhananjay joined #gluster
13:24 coredump joined #gluster
13:24 lord4163 joined #gluster
13:24 d4n13L joined #gluster
13:24 caitnop joined #gluster
13:24 sage joined #gluster
13:24 pocketprotector joined #gluster
13:24 virusuy joined #gluster
13:24 lh joined #gluster
13:25 al joined #gluster
13:26 farblue back
13:27 farblue anyhow, yes, as I said earlier, still trying to work out how to handle a RAID6 style setup :(
13:27 farblue but I’ve moved away from dispersed as it simply didn’t seem stable for me
13:27 anil joined #gluster
13:28 DV joined #gluster
13:28 gowtham joined #gluster
13:29 RameshN joined #gluster
13:29 vmallika joined #gluster
13:29 cholcombe joined #gluster
13:30 continum joined #gluster
13:30 continum Hi guy, is there any way to resolve missing gfids in gluster georeplication?
13:35 post-factum farblue: I've stuck to distributed-replicated (replica 2) with each brick being raid1, and hope to convert it into replica 3 arbiter 1
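For readers following along, creating such a volume from scratch looks roughly like this (hosts and paths are placeholders); converting an existing replica-2 volume in place depends on the gluster release and is not shown here:

    gluster volume create arb-vol replica 3 arbiter 1 \
        host1:/bricks/data host2:/bricks/data host3:/bricks/arbiter
    gluster volume start arb-vol
    # the arbiter brick stores only metadata, so it needs little space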
13:36 farblue so how does r3-a1 work in different situations?
13:37 post-factum farblue: it should prevent split-brains in case of single node failure
13:37 farblue but it won’t cope with 2 of the 3 replica nodes being down?
13:38 post-factum farblue: afaik, the volume goes read-only in case of 2/3 failure
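That behaviour is governed by the quorum options; a hedged sketch of the knobs involved, with a placeholder volume name (check the current value with `gluster volume get MYVOL cluster.quorum-type` before changing anything):

    gluster volume set MYVOL cluster.quorum-type auto        # client-side quorum, more than half the replicas must be up
    gluster volume set MYVOL cluster.quorum-count 1          # only honoured with quorum-type fixed
    gluster volume set MYVOL cluster.server-quorum-type server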
13:38 farblue yeah, that’s what i expected but not really what I wanted
13:38 farblue I mean, it’s good for safety
13:38 farblue but not for uptime
13:39 post-factum farblue: what you want, I guess, is multiple arbiters
13:39 post-factum but I have no idea if that is possible now
13:39 dlambrig_ joined #gluster
13:39 farblue well, I think I need a quorum of 5 but I don’t actually need 5 copies of the data so yeah, 2 arbiters would prob. do it
13:42 skoduri joined #gluster
13:43 post-factum farblue: cephfs is far more flexible in that way :(
13:44 mhulsman1 joined #gluster
13:44 farblue it also looks much more complex to setup and debug :(
13:45 post-factum farblue: it is quite simple after smoking the quickstart guide for several hours
13:45 farblue lol
13:46 post-factum farblue: the concept of metadata separation should come to gluster as well with the 3.8 release
13:46 post-factum afaik
13:46 post-factum arbiter volume is the first possibility to somehow separate the metadata
13:46 farblue ah, ok. And the ETA for 3.8?
13:46 dlambrig_ joined #gluster
13:47 post-factum when it is ready
13:47 post-factum may be, this summer
13:47 farblue fair enough :)
13:48 jockek joined #gluster
13:49 farblue I might need to investigate ceph again then :(
13:49 farblue but gluster just seems so much ‘nicer'
13:49 bennyturns joined #gluster
13:50 post-factum yep, especially in terms of resisting to flood
13:50 post-factum i could easily make cephfs hang with simple rsync
13:50 post-factum and that is weeeiiird, and devs know that
13:51 farblue quite
13:51 post-factum should reinspect jewel release, btw. they declare fs to be stable there
13:51 scobanx joined #gluster
13:51 farblue I guess my requirements are not the traditional target audience for gluster
13:51 haomaiwa_ joined #gluster
13:52 farblue I’m not looking to build a huge commodity storage array
13:52 farblue I have pretty minimal needs in some regards - only maybe up to 500Gb
13:53 farblue but I need it to be available and resilient and performant because I’m using it to back Containers
13:53 nbalacha joined #gluster
13:53 farblue and those containers can move about between servers in the cluster
13:53 post-factum farblue: i ran lxc over ceph rbd
13:54 post-factum and now run kvm over ceph rbd
13:55 farblue I’m sure ceph would be able to cope :) but it has all these different moving parts and I’ve only got 5 servers in a single cluster on a single lan switch and (much like kubernetes for scheduling) it just feels like overkill
13:55 theron joined #gluster
13:56 farblue although, to be fair, this ‘little cluster’ did manage to have issues with the disperse volume I set up :(
13:56 post-factum farblue: the beauty of ceph is that it runs as well on 1 node as on 500 nodes
13:56 post-factum i'm really confused thinking about gluster on 500 nodes
13:57 post-factum but now for shared file storage gluster is the only option
13:57 post-factum if you need to host system images or VM, go with ceph
13:57 post-factum or — better — mix two of them like we do
13:58 farblue I’m not hosting images, I’m hosting the persistent data needed by the Containers
13:58 post-factum those data are not shared, i guess
13:58 farblue the data isn’t shared because the container is only ever running in one place
13:58 farblue but as the container hops between servers it still needs access to its data
13:58 post-factum that is what I'm referring to as "system images"
13:59 farblue to me a system image is what you would use to spin up a VM
13:59 farblue I’m talking about the data generated by the VM once it’s started
13:59 post-factum you do not even need to have cephfs, you may go with ceph rbd
13:59 farblue take for example if I spin up a container with MySQL inside it
13:59 post-factum you connect rbd to container to have it as block device, and create xfs on top of that device
14:00 farblue yeah, Docker has a volume plugin for that kind of thing
14:00 farblue but volume plugins and Nomad don’t currently behave well together
14:00 post-factum :(
14:00 farblue and I’m also unsure how you’d go about doing backup
14:01 post-factum backups are pretty painful with ceph, yeah :/
14:01 post-factum and less painful with gluster
14:01 haomaiwa_ joined #gluster
14:01 farblue as I’m migrating from non-containerised to containerised it just seems easier for me to mount the shared fs on all the hosts and then use host mounts in the docker containers
14:02 farblue after all, containers are not like VMs - there’s no penalty to mounting a host mountpoint inside the container
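A minimal sketch of the host-mount approach farblue describes, with placeholder server, volume and container names:

    # once per host (or via /etc/fstab with the _netdev option)
    mount -t glusterfs gluster1:/appdata /mnt/appdata
    # bind-mount a per-container subdirectory into the container
    docker run -d -v /mnt/appdata/mysql1:/var/lib/mysql mysql:5.7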
14:08 ahino joined #gluster
14:13 aravindavk joined #gluster
14:21 bennyturns joined #gluster
14:29 robb_nl joined #gluster
14:29 kpease joined #gluster
14:33 karnan joined #gluster
14:35 spalai joined #gluster
14:40 Lee1092 joined #gluster
14:42 kovshenin joined #gluster
14:43 skoduri joined #gluster
14:45 kpease_ joined #gluster
14:54 farhorizon joined #gluster
14:54 Bhaskarakiran joined #gluster
14:57 beeradb_ joined #gluster
15:01 spalai left #gluster
15:03 wushudoin joined #gluster
15:08 Gaurav_ joined #gluster
15:19 goretoxo joined #gluster
15:20 haomaiwa_ joined #gluster
15:35 amye joined #gluster
15:48 jlockwood joined #gluster
15:58 jlockwood joined #gluster
16:00 Rasathus_ joined #gluster
16:00 Bhaskarakiran joined #gluster
16:01 haomaiwa_ joined #gluster
16:06 theron joined #gluster
16:09 mhulsman joined #gluster
16:10 jlockwood joined #gluster
16:14 coredump joined #gluster
16:16 atinm joined #gluster
16:19 overclk joined #gluster
16:22 gem joined #gluster
16:26 theron joined #gluster
16:30 om2 joined #gluster
16:35 B21956 joined #gluster
16:42 Rasathus joined #gluster
16:58 atinm joined #gluster
17:01 haomaiwang joined #gluster
17:01 rafi joined #gluster
17:03 overclk joined #gluster
17:08 jri joined #gluster
17:18 shyam joined #gluster
17:21 pgreg joined #gluster
17:28 gnulnx How in the heck do I stop gluster on one peer in a volume?  I want to do some maintenance to this box.  'detach' doesn't seem like the right word.
17:33 karnan joined #gluster
17:40 ahino joined #gluster
17:42 robb_nl joined #gluster
17:46 post-factum gnulnx: you should stop corresponding services
17:48 theron joined #gluster
17:48 julim joined #gluster
17:49 gnulnx post-factum: I did do 'service glusterfs-server stop' and that does stop glusterd, but glusterfs and glusterfsd are still running.
17:49 gnulnx I guess essentially I want to down a brick, temporarily
17:49 hackman joined #gluster
17:53 nishanth joined #gluster
18:00 chirino_m joined #gluster
18:01 haomaiwa_ joined #gluster
18:04 gbox gnulnx:  Notice post-factum used the plural services.  I think that includes glusterd and glusterfsd.  Some of the docs simply have you kill those processes.
18:04 gbox gnulnx:  I agree it's confusing
18:09 kpease joined #gluster
18:10 shubhendu joined #gluster
18:16 gnulnx gbox: ah, got it.  Just a `kill` might do then.
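A hedged recipe for what gnulnx wants, matching the service name he already used (exact service and process names vary by distribution):

    service glusterfs-server stop   # stops glusterd only
    pkill glusterfsd                # brick processes
    pkill glusterfs                 # self-heal daemon, gluster NFS, any local fuse mounts
    # ... do the maintenance ...
    service glusterfs-server start  # glusterd respawns the brick processes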
18:16 jiffin joined #gluster
18:21 Manikandan joined #gluster
18:22 gbox Does anyone know what xattr changelogs mean?  My volume's trusted.vol-client-2 are a mix of 0x000000010000000100000000, 0x000000020000000100000000, and 0x000000030000000100000000.  Self-heal fails on these & I can't tell what's wrong.
18:29 jiffin joined #gluster
18:35 mpietersen joined #gluster
18:36 jiffin joined #gluster
18:39 steveeJ how can I get rid of a stuck volume? it seems to be in creation but it's not yet showing up in the volume info
18:46 theron_ joined #gluster
18:47 theron__ joined #gluster
19:01 d0nn1e joined #gluster
19:01 haomaiwa_ joined #gluster
19:03 skylar joined #gluster
19:07 farhorizon joined #gluster
19:07 jiffin1 joined #gluster
19:14 jiffin1 joined #gluster
19:28 ghenry joined #gluster
19:48 Wizek joined #gluster
20:01 haomaiwa_ joined #gluster
20:05 farhoriz_ joined #gluster
20:05 hagarth joined #gluster
20:16 gnulnx left #gluster
20:23 gbox What do changelog trusted.VOL-client-# xattr mean? I see 0x000000010000000100000000, 0x000000020000000100000000, and 0x000000030000000100000000.  Self-heal fails on these.
20:29 gbox I understand the docs (https://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain).  There is just not enough information there.  I'll try to read the code.  This happens often enough someone should know.
20:29 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
20:48 johnmilton joined #gluster
20:49 bwerthmann joined #gluster
20:53 BitByteNybble110 joined #gluster
21:01 haomaiwa_ joined #gluster
21:02 om2 joined #gluster
21:05 bluenemo joined #gluster
21:05 bwerthmann joined #gluster
21:06 bwerthmann joined #gluster
21:16 farhorizon joined #gluster
21:21 farhorizon joined #gluster
21:31 misc joined #gluster
21:33 bwerthmann joined #gluster
21:36 ahino1 joined #gluster
21:37 farhorizon joined #gluster
21:40 robb_nl joined #gluster
21:45 papamoose2 joined #gluster
21:47 joseki joined #gluster
21:47 farhorizon joined #gluster
21:47 joseki hey all. i was wondering if anyone could provide some quick assistance, having an issue with my prod glusterfs
21:48 joseki i changed the ip address and now cannot get the management service to start
21:48 joseki so, can't run peer probe, etc.
21:49 ira joined #gluster
21:49 msvbhat joined #gluster
21:49 abyss^ joined #gluster
21:52 joseki one host is up, and the peer probe says already in peer list. the second host won't connect - 0-management: Initialization of volume 'management' failed, review your volfile again
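One unofficial but commonly used approach, since glusterd records peers and bricks by address under /var/lib/glusterd: stop glusterd, back the directory up, and fix the stale address by hand (the IP below is a placeholder):

    systemctl stop glusterd                 # or: service glusterfs-server stop
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    grep -r '10.0.0.5' /var/lib/glusterd    # the old IP
    # edit the matching files under peers/ and vols/ to the new address
    # (or switch to hostnames), then:
    systemctl start glusterd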
21:52 syadnom_ anyone using gluster as a primarily read-only data store over WAN links?
21:54 syadnom_ most clients would only read data.  uploads are a limiting factor so spreading the files over a few sites like a very very mild version of a torrent is the goal...
22:00 gbox joseki: DNS stayed the same?
22:01 gbox syadnom_: Have you looked at geo-replication?  Might work to have replica-2 volumes in several places, synced by geo-replication
22:01 haomaiwang joined #gluster
22:02 gbox syadnom_: depends on latency, design, clients, etc.
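For a read-mostly WAN mirror the usual pattern is one writable master site plus geo-replicated read-only copies; a hedged outline with placeholder names, assuming passwordless ssh from a master node to the slave:

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status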
22:08 gbox AFR issues: file has trusted.afr.gv0-client-2=0x000000020000000100000000
22:08 gbox Do an ls -l to initiate self-heal
22:08 gbox Then the file has trusted.afr.gv0-client-2=0x000000030000000200000000
22:08 gbox Essentially that made it worse
22:10 gbox OK it's a big file, 1.5G, so maybe it takes awhile to heal. It now has trusted.afr.gv0-client-2=0x000000030000000000000000
22:36 gbox Indeed eventually it resolved via self-heal with prodding using ls & stat
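For anyone hitting the same thing: each trusted.afr.<volume>-client-N value packs three 32-bit counters (data, metadata and entry operations still pending against that brick), so 0x000000020000000100000000 means 2 data and 1 metadata operation pending. A sketch of inspecting and prodding a file, with placeholder paths:

    getfattr -d -m . -e hex /bricks/gv0/path/to/file   # run on the brick, not the mount
    stat /mnt/gv0/path/to/file                         # a lookup from a client triggers self-heal
    gluster volume heal gv0 info                       # watch the pending list shrink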
22:53 luizcpg joined #gluster
22:54 shyam joined #gluster
23:01 7GHAAO08O joined #gluster
23:06 shyam joined #gluster
23:09 mowntan joined #gluster
23:26 shyam joined #gluster
23:28 johnmilton joined #gluster
23:35 luizcpg joined #gluster
23:58 bwerthmann joined #gluster
