
IRC log for #gluster, 2017-09-05


All times shown according to UTC.

Time Nick Message
00:07 daMaestro joined #gluster
00:35 omie888777 joined #gluster
01:01 atrius joined #gluster
01:14 baber joined #gluster
01:24 gyadav joined #gluster
01:28 shyu joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod3 joined #gluster
02:39 Freaken joined #gluster
03:27 ronrib joined #gluster
03:50 Guest9038 joined #gluster
03:51 Shu6h3ndu joined #gluster
03:51 itisravi joined #gluster
03:54 gyadav joined #gluster
04:16 dominicpg joined #gluster
04:24 apandey joined #gluster
04:28 karthik_us joined #gluster
04:35 atinm joined #gluster
04:37 poornima joined #gluster
04:44 susant joined #gluster
04:44 fiyawerx joined #gluster
04:47 Humble joined #gluster
04:48 skumar joined #gluster
04:52 itisravi joined #gluster
04:56 riyas joined #gluster
05:24 kdhananjay joined #gluster
05:31 buvanesh_kumar joined #gluster
05:31 omie888777 joined #gluster
05:32 atrius joined #gluster
05:34 rafi joined #gluster
05:38 aravindavk joined #gluster
05:47 kotreshhr joined #gluster
05:54 Prasad joined #gluster
05:56 Humble joined #gluster
05:58 sanoj joined #gluster
06:12 apandey joined #gluster
06:14 hgowtham joined #gluster
06:22 msvbhat joined #gluster
06:28 knishida joined #gluster
06:29 knishida joined #gluster
06:35 skumar_ joined #gluster
06:52 _KaszpiR_ joined #gluster
07:01 sanoj joined #gluster
07:09 shiywang joined #gluster
07:09 shiywang .all hi, I'm a newbie, and I would like to contribute some code to GlusterFS out of personal interest and to learn about file systems. does anyone know how to find some issues that are good for a new contributor?
07:14 aravindavk joined #gluster
07:24 skumar__ joined #gluster
07:27 kotreshhr joined #gluster
07:28 kotreshhr left #gluster
07:31 ivan_rossi joined #gluster
07:36 Teraii joined #gluster
07:37 shyu joined #gluster
07:39 fsimonce joined #gluster
07:43 omie888777 joined #gluster
07:45 msvbhat joined #gluster
08:06 mohan joined #gluster
08:06 kotreshhr joined #gluster
08:13 madwizard joined #gluster
08:24 weller hi again ;-) are there recommended settings for a replicated gluster volume shared via samba? i really have performance issues when copying directories with many small files onto my share. thanks in advance!
08:25 shiywang joined #gluster
08:30 kdhananjay joined #gluster
08:33 Prasad joined #gluster
08:44 jkroon joined #gluster
08:44 karthik_us joined #gluster
08:48 h4rry joined #gluster
08:50 kotreshhr joined #gluster
08:50 kdhananjay joined #gluster
08:54 weller Monitoring traffic in wireshark shows that every file creation has 3 'Create Request File:' packets. The first two fail. The create request is followed by many gluster LOOKUP packets; depending on the size of the directory this takes quite some time. any idea how to accelerate that? Is this a gluster or a samba problem?
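
A minimal sketch of the md-cache tuning that is often suggested for metadata-heavy SMB/small-file workloads on gluster 3.9 and later; "myvol" is a placeholder volume name and the values are only illustrative, not a verified fix for the LOOKUP storm described above:

    # let servers send cache invalidations so longer client-side caching stays safe
    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.cache-invalidation on
    # cache stat/xattr metadata on the client so repeated LOOKUPs are served locally
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.md-cache-timeout 600
    # also cache the Samba/Windows-specific metadata xattrs
    gluster volume set myvol performance.cache-samba-metadata on
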
08:55 buvanesh_kumar joined #gluster
09:01 ThHirsch joined #gluster
09:09 jkroon joined #gluster
09:17 _KaszpiR_ joined #gluster
09:19 Larsen_ joined #gluster
09:20 msvbhat joined #gluster
09:22 aardbolreiziger joined #gluster
09:26 kotreshhr joined #gluster
09:29 Guest9038 joined #gluster
09:29 gunix cloph_away: i closed the vms last night and today i booted them back. now it's not mirroring the data ... it worked yesterday
09:29 cloph_away check connectivity/firewall rules, show some basic probing you did (volume status, peer status as very minimum)
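
A sketch of the basic probing being asked for here, assuming a placeholder volume name "myvol":

    gluster peer status             # are all peers shown as 'Peer in Cluster (Connected)'?
    gluster volume status myvol     # are all bricks and self-heal daemons online with a PID/port?
    gluster volume heal myvol info  # are any entries pending heal?
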
09:32 gunix firewall is turned off, probing looks ok. i can paste to pastebin
09:33 gunix cloph_away: nevermind i rebooted again and it's working now
09:44 Klas weller: many small files and gluster are unfortunately basically rubbish
09:46 weller klas: direct copy via scp is still fast. 2k files of 10KB each are transferred in 2 seconds onto a gluster volume... would not call that rubbish ;-)
09:47 Klas do you copy that to a mount point?
09:47 weller yes
09:48 Klas that sounds strangely fast
09:48 [diablo] joined #gluster
09:48 weller let me double check
09:48 Klas =P
09:50 weller 2.8 seconds
09:50 _KaszpiR_ joined #gluster
09:51 Klas well, I'm impressed then =)
09:51 weller that's why I am so desperate
09:51 weller ;-)
09:51 [diablo] joined #gluster
09:52 weller my impression is, when people talk about 'many small files' they mean hundreds of thousands of these ;-)
09:52 weller not just 2000...
09:55 [fre] [diablo] Hi.
09:57 [diablo] :)
09:57 [diablo] fweddy
10:14 MrAbaddon joined #gluster
10:19 ThHirsch joined #gluster
10:33 ivan_rossi left #gluster
10:45 Klas weller: I've seen people with a couple of hundred files have issues
10:45 Klas but they had high latency between nodes
10:45 weller have you seen people with solutions? ;-)
10:45 Klas decreasing latency =P
10:46 Klas not sure how samba-client works, but FUSE writes to each server
10:46 weller I don't have high latency, the network is 10Gbit
10:46 Klas hmm, otoh, can samba client write to each server?
10:47 weller yes
10:47 Klas the one you are testing with
10:47 Klas ok, then I'm out of ideas, hopefully someone more knowledgeable can help you =)
10:48 weller thanks anyways
10:49 weller to me it seems that it is a not so fashionable problem... :(
10:50 shyu joined #gluster
11:09 gunix haha [diablo] nice name
11:10 caitnop joined #gluster
11:14 [diablo] cheers
11:14 [diablo] had it years
11:14 [diablo] like 16
11:16 karthik_us joined #gluster
11:39 aravindavk joined #gluster
11:41 hgowtham joined #gluster
12:03 weller is there a scenario where it could make sense to have a brick on a ramdisk? thinking of 'superfast' incoming data, that is then later distributed to the normal disks somehow...
12:04 weller would gluster support this?
12:07 decayofmind joined #gluster
12:12 shyu joined #gluster
12:24 msvbhat joined #gluster
12:25 Klas that sounds like tiered storage; unless I'm very much mistaken, that is not a feature in glusterfs
12:30 Freaken joined #gluster
12:37 sanoj joined #gluster
12:39 buvanesh_kumar_ joined #gluster
12:44 rwheeler joined #gluster
12:54 baber joined #gluster
12:54 ton31337 left #gluster
13:00 kdhananjay joined #gluster
13:06 anthony25 joined #gluster
13:11 Guest9038 joined #gluster
13:13 plarsen joined #gluster
13:15 _KaszpiR_ joined #gluster
13:27 ingard joined #gluster
13:30 bit4man joined #gluster
13:31 Peppard joined #gluster
13:33 zerick joined #gluster
13:40 ingard hi. I'm seeing quite high load avg on my glusterfs 3.10 servers. loadavg around 30 when each server is receiving maybe on avg 500mbits of data
13:41 ingard the only difference here is that the servers are 9ms away from the clients (fuse mounted)
13:41 ingard the difference being compared to the behaviour observed on other gluster clusters
13:44 ThHirsch joined #gluster
13:46 Guest9038 joined #gluster
13:50 gnulnx left #gluster
13:52 major joined #gluster
13:58 nbalacha_ joined #gluster
14:00 kotreshhr left #gluster
14:01 Wayke91 joined #gluster
14:04 Wayke91 joined #gluster
14:07 kotreshhr joined #gluster
14:10 aravindavk joined #gluster
14:18 aravindavk joined #gluster
14:20 susant joined #gluster
14:24 Wayke91_ joined #gluster
14:28 jbrooks joined #gluster
14:31 kotreshhr left #gluster
14:35 JoeJulian [02:53] <[fre]> seems I had to open "auth.allow " with my additional server
14:35 JoeJulian [02:53] <[fre]> Does it make sense for a volume that is already running on 2 nodes?
14:36 JoeJulian Yes, of course. You have to allow all your servers and clients access.
14:36 [fre] you don't have to when initially creating a normal replicated volume
14:37 JoeJulian Most people don't have an auth.allow set initially.
14:37 [fre] plus, it hardly makes sense to me, your peer node has been joined in the cluster.
14:38 [fre] so, it is "by design" authorised and acknowledged to access the volume.
14:38 JoeJulian So, when you "create volume" there are no auth limits. When you add an auth limit, you add it to the *volume*. The peering happens at the management level.
14:39 JoeJulian It was originally designed for trusted networks. Even the auth.allow is a bit of a hack, imho.
14:39 [fre] true... no auth-limits exist at creation-time...
14:39 JoeJulian For truly untrusted environments, I would use ssl.
14:39 JoeJulian especially with the encryption being moved to the kernel!
14:40 JoeJulian Have you seen the new numbers on that? It's pretty cool.
14:41 JoeJulian As an aside, I'm not calling you out, I just wanted the discussion public so it can help others who might google this info later.
14:41 JoeJulian Because I assure you, someone else is going to. :)
14:42 [fre] yeah, tnx. But it's hard to filter out this info sometimes.
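
For the archive, a sketch of how auth.allow is set on a volume; the volume name and addresses are placeholders:

    # allow specific clients/servers (comma-separated IPs, wildcards allowed)
    gluster volume set myvol auth.allow "192.168.10.*,10.0.0.5"
    # remove the restriction again (back to allow-all)
    gluster volume reset myvol auth.allow
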
14:45 gyadav joined #gluster
14:53 farhorizon joined #gluster
14:55 mohan joined #gluster
14:57 baber joined #gluster
15:02 [fre] Haven't seen the ssl performance numbers... we're currently already suffering from too many delays due to huge amounts of small files
15:03 [fre] hence my tests to move in new bricks in volumes, trying to phase out slower setups.
15:05 jstrunk joined #gluster
15:06 [fre] just a silly question... If I change a replica 2 to 3 with an additional disk, then how safe is it to remove the old disk and change it back to replica 2? Can I just use the remove-brick command?
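
The question above is normally handled with remove-brick plus the new replica count; a hedged sketch with placeholder names (check that heal info is clean first, since the removed copy is simply dropped):

    # make sure no heals are pending before shrinking the replica set
    gluster volume heal myvol info
    # drop back from replica 3 to replica 2 by removing one brick
    gluster volume remove-brick myvol replica 2 server3:/bricks/brick1 force
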
15:08 farhorizon joined #gluster
15:09 mbrandeis joined #gluster
15:09 farhoriz_ joined #gluster
15:12 kpease joined #gluster
15:18 wushudoin joined #gluster
15:20 aravindavk joined #gluster
15:24 Wayke91 joined #gluster
15:36 shaunm joined #gluster
16:05 Guest9038 joined #gluster
16:09 dominicpg joined #gluster
16:18 baber joined #gluster
16:29 sac` joined #gluster
16:30 mbrandeis joined #gluster
16:35 rdanter joined #gluster
16:55 gunix weller: Klas: how come tiered storage is not supported? i can find it in the gluster man page. also, there is geo-replication. can't this be used with ramdisks on the 1st machine?
16:57 gunix cloph: i added the 3rd node to the cluster and it's replicating ok to all 3. they are all added with localhost in /etc/fstab and they seem to boot fine. i am using ubuntu 17.04 with ceph from repos (not additional repos installed). if i will encounter issues in the future (which i expect), i will let you know (if you are interested)
16:57 gunix *with GLUSTER from repos, not ceph. well, that was an awkward typo. i am sorry
16:59 vbellur joined #gluster
17:03 weller gunix: do you gain performance by that?
17:04 weller yeah, I would love to hear about that
17:04 arpu_ joined #gluster
17:04 weller *would love _not_ to hear about, actually ;-)
17:05 farhorizon joined #gluster
17:06 gunix weller: i am noob in gluster, i saw the discussion and i thought someone should dig deeper into this and run some tests. i am curious if this would actually work
17:06 gunix i mean, "nope it doesn't work" is not something i usually like to see without people actually testing that stuff out :D
17:07 weller same for me ;-) I did not look into it any further than the first google hit (https://sites.google.com/site/tfsidc/ramdisk-cluster)
17:07 weller having plenty of ram that is not used (yet?) this could be useful
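
As gunix notes above at 16:55, gluster 3.7-3.12 did ship a tiering feature that could in principle put a hot tier on faster media (it was later deprecated); a hedged sketch with placeholder host and path names, and no claim that a RAM-backed brick is a supported configuration:

    # attach a replicated hot tier (e.g. bricks on SSD or other fast storage)
    gluster volume tier myvol attach replica 2 node1:/mnt/fast/brick node2:/mnt/fast/brick
    # watch promotion/demotion activity
    gluster volume tier myvol status
    # remove the hot tier again
    gluster volume tier myvol detach start
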
17:09 baber joined #gluster
17:12 msvbhat joined #gluster
17:26 shruti joined #gluster
17:35 DoubleJ joined #gluster
17:38 rafi1 joined #gluster
17:39 farhorizon joined #gluster
17:47 Wayke91 hey all, I'm trying to change a volume from replica 2 to replica 3 arbiter 1.  I keep getting the same error and I've rebuilt it as vms several times and on different hardware.  I'm pretty sure I'm doing something wrong as I've done this twice before without issue.  When I run the command: "gluster volume add-brick <volume> replica 3 arbiter 1 <bric
17:47 Wayke91 ks>  I get the errors: internet address '?test-arbiter1' does not conform to standards\ volume add-brick: failed: Pre-validation failed on localhost. Please check log file for details
17:48 Wayke91 In the log I get the following: [glusterd-utils.c:1159:glusterd_brickinfo_new_from_brick] 0-management: Failed to convert hostname ?test-arbiter1 to uuid, and I think that is where it begins to fail.
17:48 Wayke91 Any ideas?
17:56 shruti joined #gluster
18:06 rafi joined #gluster
18:07 rdanter joined #gluster
18:16 baber joined #gluster
18:20 kpease joined #gluster
18:20 vbellur joined #gluster
18:22 marin[m] joined #gluster
18:22 sac joined #gluster
18:22 marin[m] hi guys, noob question here
18:23 marin[m] i have a bunch of servres with relatively small drives
18:23 marin[m] and i need one fault-tolerant large volume
18:23 marin[m] what should i go for? replicated + distributed, replicated + striped, replicated + distributed + striped ?
18:24 marin[m] and will these modes work just like a simple replicated volume?
18:26 marin[m] the volume will be used for backup, so resiliency is more important than performance
18:26 marin[m] planning to use a replication factor of at least 3
18:28 shruti joined #gluster
18:28 Wayke91 Are you going to RAID the drives on each server and present just a single brick for each node?
18:28 marin[m] yes
18:28 marin[m] 1 brick per node
18:28 Wayke91 how many nodes do you want to use?
18:28 marin[m] something like 30
18:30 kpease joined #gluster
18:30 Wayke91 I feel like then you'd be looking at a distributed, replicated volume since resiliency is your big factor
18:31 marin[m] so striping is just for performance
18:31 marin[m] and distributed means that files themselves will not be split into chunks
18:31 marin[m] right?
18:31 Wayke91 right, and as far as I know there isn't any replication for a striped volume
18:32 Wayke91 distributed gets the file in its entirety on one brick, and it will load balance files between the bricks
18:32 marin[m] and it means that 1 single file cannot be bigger than the size of 1 brick
18:32 Wayke91 there's a short article in the docs that does a pretty good job explaining it, and the distributed replicated is very easy to setup: https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/
18:32 glusterbot Title: Architecture - Gluster Docs (at gluster.readthedocs.io)
18:33 prasanth_ joined #gluster
18:34 Wayke91 Yeah that's my understanding, so if you have massive files then I'm not 100% on how to deal with that.  All of my bricks are massive so it's not been an issue for me
18:35 marin[m] the bricks will be somewhere around 400G
18:36 marin[m] so this should not be an issue
18:36 marin[m] but i will notify the users of the said limitation
18:37 marin[m] also, i'm imagining that if i have large files, the space usage might not be the same across all the bricks
18:37 marin[m] do you know if there's any balancing based on file size and available brick space?
18:53 Wayke91 I believe there is, it tries its best to balance the used space between the bricks
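
A minimal sketch of the distributed-replicated layout discussed above, scaled down to 6 placeholder servers with one brick each (replica 3 gives two distribution subvolumes; with 30 bricks it would be ten), plus the rebalance command that spreads existing data after bricks are added later:

    gluster volume create backupvol replica 3 \
        srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1 \
        srv4:/bricks/b1 srv5:/bricks/b1 srv6:/bricks/b1
    gluster volume start backupvol
    # after growing the volume with add-brick, move existing files onto the new bricks
    gluster volume rebalance backupvol start
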
18:54 baber joined #gluster
18:56 marin[m] ok
18:56 marin[m] thank you very much Wayke91, you pointed me in the right direction
18:57 Wayke91 glad to help!
18:57 marin[m] i got the general idea, now i will go read the fine manual and deploy the stuff
19:00 _KaszpiR_ joined #gluster
19:22 baber joined #gluster
19:41 msvbhat joined #gluster
20:02 MrAbaddon joined #gluster
20:05 primehaxor joined #gluster
20:10 Larsen_ joined #gluster
20:30 siel joined #gluster
20:43 marc_ joined #gluster
20:44 marc_ Hi there, I got a problem with gluster replication on newly added bricks: Old files do not appear in new bricks.
20:46 fury joined #gluster
20:46 Wayke91 how long has it been since you added the bricks?
20:46 marin[m] from what i know they will be replicated on the first access
20:46 marin[m] or at least that's the way it was in older versions
20:47 marc_ Hours - and there are only 3 text files
20:47 marc_ marc@universum:~$ ls /var/volumes/
20:47 marc_ test2.txt  test3.txt
20:47 Wayke91 did you change replica at all?  if not, then old files likely wont show up with that few files
20:48 marc_ test2.txt and test3.txt have been created after universum was added, test.txt was there before and is missing. Done about 4 hours ago.
20:48 marc_ Now I have 4 servers, and I added the last with:
20:48 marc_ sudo gluster volume add-brick volumes replica 4 raum:/var/gluster/volumes foce
20:49 Wayke91 since you didnt change the replica for the volume, existing files wont move to the new bricks
20:49 marc_ (force of course)
20:49 marin[m] mount the gluster volume somewhere ad do a ls -la test.txt
20:49 marc_ replica was 3 before, changed to 4 when adding the new brick
20:49 marin[m] and see if that replicates it
20:49 marc_ ? yes
20:50 marc_ But then the response of ls is wrong!
20:50 marc_ marc@universum:~$ ls /var/volumes/test.txt
20:50 fury hi guys
20:50 marc_ /var/volumes/test.txt
20:50 Wayke91 hmm, anything look interesting in gluster volume heal <volume> info?
20:50 fury how's it hangin?
20:51 marc_ yes:
20:51 marc_ marc@universum:~$ sudo gluster volume heal volumes info
20:51 marc_ [sudo] Passwort für marc:
20:51 fury i just set up a fresh debian 9.1 server and the first thing i've done to it is try to install gluster, however the service fails to start: https://pastebin.com/jrizpFrV
20:51 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:51 marc_ Brick pulsar:/var/gluster/volumes
20:51 marc_ Status: Der Socket ist nicht verbunden
20:51 marin[m] did you write that file through the mounted gluster volume, or directly on the first server, on the filesystem of the brick?
20:51 marc_ Brick urknall:/var/gluster/volumes
20:51 marc_ /
20:51 marc_ /test2.txt
20:51 marc_ /test3.txt
20:51 marc_ Number of entries: 3
20:51 marc_ Brick universum:/var/gluster/volumes
20:51 fury oh
20:51 marc_ /
20:51 marc_ /test2.txt
20:51 marc_ /test3.txt
20:51 marc_ Number of entries: 3
20:51 marc_ Brick raum:/var/gluster/volumes
20:51 marc_ /
20:51 marc_ /test3.txt
20:51 marc_ Number of entries: 2
20:51 marc_ @paste
20:51 glusterbot marc_: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
20:52 fury https://paste.fedoraproject.org/paste/SNbIMh0mF6U7EqZ6ShPvOQ
20:52 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
20:52 gunix Wayke91: did you fix your issue?
20:52 Wayke91 I did not
20:52 gunix weller: nice site, if you fix that, please tell me
20:53 Wayke91 marc_: gluster volume status, do all your bricks show online?
20:53 gunix Wayke91: can you use some sort of pastebin to show the behaviour?
20:53 marc_ here: @paste https://paste.fedoraproject.org/paste/sbqRcFFgy84zeGT9dVKD5Q
20:53 glusterbot Title: sudo gluster volume heal volumes info - Modern Paste (at paste.fedoraproject.org)
20:54 marc_ The newest added has 2 instead of 3 entries
20:54 marin[m] how did you initially write the first file
20:54 marin[m] test.txt
20:54 marc_ tee
20:55 marin[m] through the fuse mounted volume, or directly to the filesystem used by the brick?
20:55 marc_ echo "somethng" | tee test.txt
20:55 marc_ fuse mounted
20:55 marin[m] and if you access that file now, through the fuse mounted filesystem, it still won't replicate to the others?
20:55 marc_ yes, it's somehow missing overall
20:56 marc_ yes, when I explicitly access it, it replicates
20:56 marc_ as you see in the paste: test2.txt is also missing on raum
20:57 Wayke91 Here's mine: https://pastebin.com/gWJcJHXC
20:57 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:57 marin[m] that was the expected behavior in earlier versions, but i understand that now it has auto heal that should do this automatically
20:57 marc_ [It is intended as replicated volumes for docker, but for this, it must be trustworthy]
20:58 marc_ which versions. marin[m]?
20:58 Wayke91 bot had me use a different pastebin: https://paste.fedoraproject.org/paste/LMSX6LXc~ketej~iv3MtVQ
20:58 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
20:59 gnulnx joined #gluster
21:00 marin[m] marc_: something like 3.2 or 3.3, very old stuff
21:00 marc_ I'm using Ubuntu 16.04, glusterfs 3.7.6 built on Dec 25 2015 20:50:46
21:00 marc_ not the very latest, but later
21:00 gunix Wayke91: did you peer probe test-arbiter1 ?
21:00 gnulnx my `gluster volume status` command shows both peers (bricks).  It shows the second peer's PID, but for the TCP port it shows N/A.  The second peer definitely has a listening TCP port for the gluster process.  What could cause this?  https://gist.github.com/kylejohnson/b47e0926a8d184b519eeb0fe917e2b3e
21:00 glusterbot Title: gist:b47e0926a8d184b519eeb0fe917e2b3e · GitHub (at gist.github.com)
21:01 marc_ Would you consider glusterfs as production-ready?
21:01 marin[m] depends on what is needed
21:01 marin[m] i use 3.6 in production
21:01 marin[m] but for .. modest workloads
21:02 marin[m] and in a "static" configuration
21:02 Wayke91 yes, it shows connected on all nodes
21:02 marc_ I want something like a raid over network :)
21:02 marin[m] where i don't add/remove bricks
21:02 marc_ Well, I only add bricks, if a new server joins the docker swarm cluster. :)
21:02 gunix Wayke91: firewall shut down ?
21:02 marc_ no firewalls
21:02 marc_ open intranet
21:02 marin[m] yeah, not sure i would use it for dynamic stuff like that
21:03 Wayke91 gunix: yes, I've tried with firewalld off
21:03 gunix Wayke91: did you try to add the bricks one by one to backups? not all 4 at once?
21:03 marc_ marin[m], why not? What are the possible problems?
21:03 marin[m] that old files are not replicated right away :)
21:03 Wayke91 gunix: volume add-brick: failed: Operation failed
21:04 gunix Wayke91: added test-arbiter1 to /etc/hosts?
21:04 Wayke91 yes, I've tried with host entries, DNS, and by IP
21:04 gunix Wayke91: i am out of ideas.
21:05 Wayke91 gunix: me too lol
21:06 marin[m] Wayke91: there is  a strange character before test-arbiter-1 in your paste
21:06 marin[m] try re-writing that line without copy-pasting
21:06 marin[m] i see a red dot, maybe an invisible character on the command line
21:06 marin[m] https://paste.fedoraproject.org/paste/LMSX6LXc~ketej~iv3MtVQ
21:06 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
21:07 marin[m] a red dot before test-arbiter-1
21:07 Wayke91 hmmm, I have written the steps down in an internal doc and I have been copying and pasting from it
21:07 gunix i see the red dot now too, but i didn't see it on pastebin
21:07 Wayke91 definitely possible something screwy is getting in there, I admit that I didn't ever fully type that line out
21:08 marin[m] try to retype the second brick
21:08 marin[m] arbiter1:/exp/brick2/back
21:08 marin[m] .. you get the idea
21:08 gunix Wayke91: i was about to suggest creating a forum post with everything (ip addr, nc host port, iptables -L, gluster peer status, all info possible) ... but if there is a strange character there...
21:09 marin[m] yeah, i am 90% sure that's your problem :)
21:09 gunix marin[m]: i am wondering why i don't see the dot. i am on archlinux. maybe i miss some fonts
21:09 marin[m] i will screenshot :)
21:09 gunix marin[m]: if you screenshot, do it with screenfetch :D
21:10 Wayke91 well, i typed the whole thing and....it worked
21:10 marin[m] yeah
21:10 Wayke91 nothing like being lazy to create extra work right hah! thanks for pointing it out
21:10 marin[m] the red dot :))
21:11 marin[m] https://imgur.com/fTcAauc
21:11 glusterbot Title: Imgur: The most awesome images on the Internet (at imgur.com)
21:12 marin[m] sorry, looks like crap for some reason, but you can still see the red dot
21:12 marin[m] :)
21:12 gunix marin[m]: what os are you on ?
21:12 marin[m] Ubuntu 16.04, using Firefox
21:13 marc_ marin[m], any idea on the source of my problem?
21:14 omie888777 joined #gluster
21:14 gunix i see it too but only on the fedora pastebin
21:14 gunix i was curious because i am on archlinux :D
21:14 gunix it has some strange behaviour sometimes
21:16 marin[m] marc_: so what exactly are you trying to achieve?
21:16 marin[m] can you explain the setup?
21:17 gunix marc_: i saw your question from earlier: "Would you consider glusterfs as production-ready?" <---> glusterfs is used in a lot of prod environments and is also fully supported by redhat
21:17 glusterbot gunix: <-'s karma is now -8
21:18 gunix glusterbot: at least i don't tell people to use ubuntu pastebin.
21:19 marin[m] marc_: from what i understand you need a shared filesystem between your docker containers
21:19 marc_ Wayke91, just seen your question above; yes all bricks are online, but not all NFS servers
21:19 Wayke91 lol i'm new here, is the bot always so sassy?
21:19 marin[m] i will not go into details how that is a poor design, and why shared filesystems should be avoided like plague :)
21:19 marin[m] but for that you don't need to replicate all files to all containers
21:19 marc_ marin[m], I need replicated filesystem on all docker swarm nodes
21:20 marin[m] containers should just be clients to a gluster .. cluster that stores all the files
21:20 marc_ also I limited access to localhost
21:20 marin[m] marc_: why?
21:20 marin[m] i mean, why do you need all files in all containers?
21:20 marc_ faster, I don't want network to access data
21:21 marin[m] that doesn't even scale, and it has a huge overhead at startup
21:21 gunix marc_: i would share the filesystem on the hypervisor level and then point the containers to the mounted folder.
21:21 marc_ yes, my volumes contain terabytes
21:21 marin[m] first of all you have to know that the data is still accessed over network
21:21 marc_ I'm managing my multimedia, running jenkins, and much more
21:21 marin[m] even if you mount glusterfs locally
21:21 marin[m] because it needs to communicate to all other nodes
21:21 marc_ doesn't it access the nearest/fastest replica?
21:22 marin[m] to replicate the data, maintain locks and so on
21:22 marin[m] not sure actually
21:22 marin[m] but that will be true for reads only anyway
21:22 marin[m] for writes
21:22 marc_ yeah, locks are not the problem, because there is no concurrent access
21:22 marin[m] you actually have to access ALL the other nodes
21:22 marin[m] and that won't scale
21:23 marc_ slow write, fast read?
21:23 marc_ that's ok
21:23 marin[m] and also, you cannot write directly to the underlying volume, you have to write to the mount point
21:23 marc_ yes, of course
21:24 marin[m] so ideally, in your idea, upon startup a container would copy all the data locally
21:24 marc_ or at access, when needed
21:24 marin[m] well that should work
21:24 marc_ but that's why I declare as many replicas as there are nodes
21:25 marc_ but access only happens when the file is seen...
21:25 marc_ unseen files are considered nonexistent by most software...
21:25 marin[m] and it is also replicated when it's accessed, so i don't see your problem there
21:25 marc_ the problem is that ls does not show it
21:26 marin[m] i was under the impression that it does
21:26 marc_ so the process does not know that the file exists and could be accessed
21:26 marc_ only after direct access
21:27 marin[m] no, that should work, if you ls the mounted glusterfs it should list all files correctly
21:27 marin[m] even if they are not replicated locally
21:27 marc_ only those that are created after adding the brick ...
21:28 marin[m] does the client have access to the other servers?
21:28 marc_ yes
21:28 marin[m] because the client does the replication
21:28 marc_ all see all
21:28 marin[m] are the names you configured in glusterfs bricks resolvable by the client?
21:29 marc_ yes, DNS ins the lan
21:29 marc_ in the lan
21:30 marc_ here you see the problem: you dont see test.txt unless you directly access it: https://paste.fedoraproject.org/paste/zBvns~nCPmZDAVEQHgxBfA
21:30 glusterbot Title: problem: oldfiles not shown before access - Modern Paste (at paste.fedoraproject.org)
21:30 marc_ the path /var/volumes is the mount point
21:30 marc_ fstab:
21:30 marc_ localhost:/volumes /var/volumes glusterfs defaults,_netdev 0 0
21:31 marc_ you see the problem?
21:31 marc_ same with the not yet visible file test2.txt
21:31 marin[m] yes, that's strange :)
21:31 marc_ YES!
21:31 marin[m] so if you ls specifically it's there, otherwise it doesn't show
21:32 marc_ after one access,it's there
21:32 marc_ see the pastebin
21:32 marc_ sorxx
21:32 marc_ NO!
21:32 marin[m] from your paste it's still not there :)
21:32 marc_ not there, only if I access it directly
21:33 marc_ yes
21:33 marc_ that's again different to that on the other host
21:33 marc_ f**k
21:33 marc_ no, sorry, I was wrong
21:34 marc_ yes, only visible on direct access, not in ls
21:34 marc_ moment ...
21:34 marc_ yes
21:34 marc_ direct access works, ls fails
21:34 marin[m] can you paste gluster volume heal VOLNAME info again please?
21:35 marc_ on any server?
21:35 marin[m] yes
21:36 marc_ https://paste.fedoraproject.org/paste/OPaQxglB5PnOeDOdIInmgg
21:36 glusterbot Title: sudo gluster volume heal volumes info - Modern Paste (at paste.fedoraproject.org)
21:36 marc_ This is strange: https://paste.fedoraproject.org/paste/OPaQxglB5PnOeDOdIInmgg
21:36 glusterbot Title: sudo gluster volume heal volumes info - Modern Paste (at paste.fedoraproject.org)
21:36 marc_ sorry
21:36 marc_ This is strange: Status: Der Socket ist nicht verbunden
21:37 marin[m] yeah, so those "need healing"
21:37 marin[m] not yet healed
21:37 marc_ What's that (english: socket not connected)
21:38 marin[m] i don't know...
21:38 marc_ but that's not an involved server
21:38 marin[m] but my advice is to make a standalone cluster and use the docker containers as clients
21:39 marin[m] don't go to production with this
21:39 marc_ English: Status: Transport endpoint is not connected
21:39 marin[m] i don't think gluster was meant to be this dynamic
21:39 marc_ Not Dynamic?
21:39 marin[m] yeah
21:40 marin[m] what happens when the machines die?
21:40 marc_ What does this mean? I thought from the description that glusterfs is highly flexible?
21:40 marin[m] i think everything will hang, waiting for the non-responsive bricks
21:40 marin[m] for a while
21:40 marc_ When a machine dies, then another takes over, container is moved, and data should be replicated too.
21:41 marc_ docker swarm migrates the code, gluster should sync the data
21:41 marc_ that's the intended use
21:41 marin[m] this is not the way it was designed to be used :)
21:41 marc_ but what was it designed for?
21:41 marin[m] the intended use is to provide a fault-tolerant distributed filesystem
21:41 marin[m] not to randomly add and remove nodes
21:42 marc_ yes, isn't that the same?
21:42 marin[m] i don't think it guarantees that the data will be replicated immediately
21:42 marc_ I don't randomly add/remove
21:42 marc_ but a node can go down of course
21:42 marc_ i.e. defect or maintenance
21:42 marin[m] how many nodes do you expect to have?
21:42 marc_ 4
21:43 marc_ well perhaps one more per 1/2-years
21:43 marin[m] i see, so not ... hundreds
21:43 marc_ slow growth not excluded...
21:43 marc_ yes
21:43 marin[m] well then i guess you should see what's the problem with that socket not connected
21:44 marin[m] gluster peer info ?
21:44 marc_ well, now I have at least the english text: Status: Transport endpoint is not connected
21:44 marc_ you mean peer status? all in cluster,connected
21:45 marin[m] and how did you mount the volume?
21:45 marc_ That was a huge problem! I had this rejected again and again and had to remove /var/lib/gluster 10-20 times on all nodes...
21:45 marin[m] guess there's some networking stuff
21:46 marc_ fstab: localhost:/volumes /var/volumes glusterfs defaults,_netdev 0 0
21:46 marc_ they are all on the same 1GB switch
21:46 marc_ direct connection, all next to each other
21:46 marin[m] there's also the docker networking layer
21:47 marc_ other porst
21:47 marc_ ports
21:47 marc_ I plan something like described here: http://embaby.com/blog/using-glusterfs-docker-swarm-cluster/
21:50 marin[m] so, this tutorial tells you to install gluster on the Physical hosts
21:50 marc_ gluster + docker swarm + the combination (swarm-volume on glusterfs)
21:50 marin[m] mount the volume locally, and mount the mounted volume in the docker images
21:50 marc_ yes
21:50 marc_ basically trivial
21:50 marin[m] so there's no glusterfs inside the containers
21:50 marc_ no
21:51 marc_ bind mount
21:51 marin[m] they don't know about that
21:51 marc_ yes
21:51 marin[m] yes, that's ok, are you doing it like this?
21:51 marc_ yes, that's my plan
21:51 marin[m] you said that you want to put gluster inside the containers :)
21:51 marc_ ... if I can trust glusterfs ;)
21:52 marc_ can I somehow force the replication?
21:52 marin[m] yes
21:52 marc_ how?
21:52 marin[m] # gluster volume heal test-volume
21:52 marin[m] Heal operation on volume test-volume has been successful
21:53 marin[m] gluster volume heal test-volume
21:53 marc_ Launching heal operation to perform index self heal on volume volumes has been unsuccessful
21:53 marc_ ?!?
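
Two things that may be worth noting for later readers, offered as general behaviour rather than a confirmed fix for this case: a full heal (which crawls the whole brick instead of only the pending-heal index) is triggered with the "full" keyword, and launching a heal tends to fail when the self-heal daemon or a brick is down, which matches the "Transport endpoint is not connected" status seen above. The volume name "volumes" is taken from this conversation:

    # crawl everything, not just the pending-heal index - useful after adding a brick
    gluster volume heal volumes full
    # confirm all bricks and the self-heal daemon on every node show Online 'Y'
    gluster volume status volumes
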
21:53 fury is there a way to set up glusterfs for a kubernetes cluster if i only have two servers available that i can give it raw block devices? i have 3 servers, but one of them has all 4 drive slots taken up by a raid 1 and it's running my website right now with apache... i want to switch over to kubernetes and containerize all my stuff (site, apps, etc.) so trying to figure out how to store data for it
21:54 marc_ similar to what I am doing, fury, but I use docker swarm
21:54 marin[m] fury: you can have 2 "data" nodes and 1 "arbiter" node with no data on it
21:55 marin[m] but it will participate in quorum
21:55 marin[m] so you should be good
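
A sketch of the 2 data nodes + 1 arbiter layout just described, with placeholder host and path names; the arbiter brick stores only file names and metadata, so the third machine (the one with no spare drive slots) needs very little space for it:

    gluster volume create kubevol replica 3 arbiter 1 \
        node1:/bricks/data/brick1 node2:/bricks/data/brick1 node3:/bricks/arbiter/brick1
    gluster volume start kubevol
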
21:55 marin[m] marc_: you have connectivity issues
21:55 marin[m] is there any firewalls?
21:55 marc_ hmm, I can try to reboot pulsar, the problem seems to be concentrated there
21:56 marin[m] not sure a reboot would do anything
21:56 marc_ it was just updated to ubuntu 16.04
21:56 marin[m] please see if there is any firewall running
21:56 marin[m] service ufw status
21:56 marin[m] or service ufw stop
21:56 marc_ no firewall
21:56 marc_ marin[m], at least I have some new traces to follow. Thanks
21:57 marc_ good night, marin[m], thank you
21:57 marin[m] you're welcome
21:57 marin[m] good night
22:03 arcadian joined #gluster
22:10 arcadian Hello, wanted to confirm a behavior I noticed that does not seem to be documented anywhere. When I check stats of a file on glusterfs, I notice the access time changes over time even though no other process is reading it. Is it possible gluster is reading the files....
22:19 wushudoin joined #gluster
22:39 freephile joined #gluster
23:29 jbrooks joined #gluster
23:52 bit4man joined #gluster
