
IRC log for #gluster, 2016-08-17


All times shown according to UTC.

Time Nick Message
00:06 JoeJulian AdStar: this is happening on your brick logs, right?
00:07 AdStar yeah
00:07 AdStar single log in here /var/log/glusterfs/bricks/bunker-brick1-big_d.log
00:08 AdStar I need to manually delete it and issue a gluster log big_d rotate to get it back down. it's currently 8GB and growing... :(
00:09 JoeJulian That must be happening really fast.
00:10 JoeJulian btw... an easier way is "truncate --size=0 /var/log/glusterfs/bricks/bunker-brick1-big_d.log"
00:11 JoeJulian copytruncate is a much more reliable way to rotate logs, imho.
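
For reference, a minimal logrotate stanza using copytruncate for the brick logs might look like this (the path is taken from the discussion above; the rotation frequency is only an example):

    /var/log/glusterfs/bricks/*.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }
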
00:11 wadeholler joined #gluster
00:12 AdStar ty I'll make a note, but yeah it's kinda annoying. "dr google" said something about making sure I had the xlator entries in my .vol files, which I do...
00:12 AdStar it is a pretty out of the box setup. nothing fancy
00:12 JoeJulian So this is supposed to watch for changes to attributes, and it throws that warning if the dict it's supposed to be checking, the string it's supposed to check for, or the function it's supposed to perform is null. It shouldn't get there.
00:13 JoeJulian Is that the new brick or the old one?
00:14 AdStar node1 brick. So old brick with data doing a heal to new brick. node1 has about 31TB of data -> replicating to node2 which was empty
00:15 JoeJulian Oh, 3.7.11... humor me and try setting each of cluster.metadata-self-heal, cluster.data-self-heal, and cluster.entry-self-heal off and see what that does to the problem.
00:15 AdStar will that affect the current heal? (I don't want to break the heal) it's been running for days now...
00:15 JoeJulian That'll just disable client-side self-heals. The self-heal daemon will still function and perform the heals.
00:16 JoeJulian Unless you have no clients.
00:16 AdStar it's a local mounted ctdb/samba cifs share
00:16 AdStar the gluster volume is active on node1
00:16 AdStar mount via gluster-fuse client
00:16 JoeJulian Yeah, should be good then.
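
The options JoeJulian suggests would be set roughly like this (assuming the volume is named big_d, as the earlier log-rotate command suggests):

    gluster volume set big_d cluster.metadata-self-heal off
    gluster volume set big_d cluster.data-self-heal off
    gluster volume set big_d cluster.entry-self-heal off

They can be switched back later with the same commands and a value of on.
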
00:21 AdStar ok they are off, will clear the log and monitor :)
00:22 cloph adding a brick to a replica 2 (with sharding) on 3.7.11 failed hard for me. No matter whether the client heal options were disabled or not, adding another brick while increasing replica to three failed. VMs running with images on that volume had read-failures and mounted their disks r/o...
00:23 cloph solution was to create a new volume and migrate stuff over...
00:23 cloph or rather workaround...
00:23 JoeJulian no sharding on this one, though. Straight up replicate.
00:24 JoeJulian Ok, I'm off to try to organize the garage a bit more so I can actually get to the tools I need. TTFN
00:25 AdStar hmm no luck, still filling the log..
00:25 AdStar https://paste.fedoraproject.org/409398/
00:25 glusterbot Title: #409398 Fedora Project Pastebin (at paste.fedoraproject.org)
00:25 AdStar ohh thanks for your advice JoeJulian :)
00:28 ZachLanich joined #gluster
00:37 Javezim With a Replica 3 Scenario, is it possible to get split brain across all 3 Bricks? If two bricks had the same data but the 3rd brick was split brain, would Gluster know to pick the two that are the same?
00:40 ZachLanich Hey guys, can someone help me out with deciding on the best Gluster architecture choice for my use case?
00:45 dlambrig joined #gluster
00:46 wadeholl_ joined #gluster
01:02 BubonicPestilenc ZachLanich: aren't there just 3 choices?
01:02 ZachLanich BubonicPestilenc Well, yes, sort of. I'm still trying to wrap my head around the benefits of each, other than the straightforward stuff in the docs.
01:03 ZachLanich BubonicPestilenc Care to take a look at what I'm trying to accomplish?
01:03 BubonicPestilenc shoot it here
01:03 BubonicPestilenc i'm still noob
01:03 chirino joined #gluster
01:05 ZachLanich Here's a writeup I sent to the mailing list. You can peer into that to see what I'm trying to do: https://gist.github.com/zlanich/9f475d748adb491cb53558cd8ad186ad
01:05 glusterbot Title: Gluster Needs · GitHub (at gist.github.com)
01:06 BubonicPestilenc 1/2 for me
01:06 ZachLanich So my initial thoughts are that Option 2 would be best for me unless the Gluster folks tell me otherwise
01:06 BubonicPestilenc #3 -> hate managing small "duplicate" tasks
01:06 ZachLanich 1/2?
01:07 BubonicPestilenc 1 or 2
01:07 ZachLanich Ah ok, yea. #3 scares me.
01:07 ZachLanich I have to do that for my runtime app containers, but that's WAY easier to manage lol.
01:09 derjohn_mobi joined #gluster
01:09 ZachLanich So my goals with a DFS are: 1. High Availability w/ at least an allowance of 1 down node (assuming ~3 gluster nodes) 2. Scalability - Ability to add more space/nodes over time 3. Performance - I won't be hitting the FS super often as there will be caching in front of it, but it has to be acceptably fast for website hosting.
01:09 ZachLanich I'm not sure what the norm is as far as what to expect from Gluster performance wise compared to a local HDD, so I don't really have a baseline.
01:10 BubonicPestilenc what data takes most space?
01:10 BubonicPestilenc i would count # of files <=>10MB
01:10 ZachLanich That leads me to my Architecture choice. It's either Replicated or Distributed Replicated for me I assume.
01:14 BubonicPestilenc naah, I'm not experienced enough to give advice
01:14 ZachLanich Well, breaking down the files, we have 1. PHP files, which will mostly be cached in OpCode, so I'd imagine them being hit once in a while on initial load, but not super often. 2. Images - Mostly cached by a CDN, but will initially get pulled from the FS. 3. Media files that don't typically get cached by a CDN (perhaps PDFs, etc). So I'd say most of those would be in the 5-10MB area?
01:15 BubonicPestilenc for 5-10 i would prefer pure replica
01:15 BubonicPestilenc always try to use simple solution
01:15 BubonicPestilenc before going deeper
01:17 ZachLanich BubonicPestilenc So is it possible to go from a pure Replica 3 (w/ 3 nodes, possibly one an arbiter if possible) to a Distributed Replicated down the road without it being a nightmare?
01:18 BubonicPestilenc no idea :D
01:28 wadeholler joined #gluster
01:39 shdeng joined #gluster
01:46 cloph ZachLanich: what is bad for php and the like is not file reads, but file misses... https://joejulian.name/blog/dht-misses-are-expensive/
01:46 glusterbot Title: DHT misses are expensive (at joejulian.name)
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 ZachLanich cloph Brilliant. That's very good information. That's the kind of stuff that would have taken me a year to find lol. I will keep that in mind. I feel like the most common (and perhaps not super frequent) case for this would be function calls like "file_exists()"? FYI, I'll be running Varnish for unauthenticated traffic and OpCode too. Thoughts?
01:56 cloph caching surely will help, but /me also is no expert, just small-scale user. I'd probably go with option 2
01:56 cloph if it is the same spindles the bricks reside on, I don't think there's much difference in performance compared to having one single large volume (but again, no own experience/benchmarks on that)
01:59 ZachLanich cloph Do you know how access control is handled in Gluster at a volume level? I can't really use ACLs in this case because the app runtime containers are their own environments, so they'd have root user within the container. Can there be different SSH Keypairs per volume or something?
01:59 ZachLanich Worst case, I can mount all the volumes on the app host and use lxc device add to mount the volume into the container from the top.
02:00 hagarth joined #gluster
02:00 cloph you can restrict who can mount with auth.allow and auth.reject options to restrict/allow from certain IPs only.
02:02 ZachLanich cloph Interesting. The containers will have their own IPs on the subnet inside the app host on 10.0.4.0/24, so would it work just to restrict based on that IP, or would the gluster server(s) only see the IP of the app host?
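
A sketch of the auth.allow/auth.reject options cloph mentions, with a hypothetical volume name and example addresses (the subnet is the one from the question):

    gluster volume set myvol auth.allow 10.0.4.*
    gluster volume set myvol auth.reject 192.168.122.*

These match the source address the bricks actually see, so whether per-container IPs work depends on whether the traffic is NATed by the app host.
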
02:03 Javezim Adstar, I've just checked one of ours running 3.7.11 and it's also getting the same warning in the brick logs
02:04 Javezim @JoeJulian, We already have these three options - cluster.metadata-self-heal: off
02:04 Javezim cluster.entry-self-heal: off
02:04 Javezim cluster.data-self-heal: off
02:04 Javezim Any idea what else it could be?
02:18 Lee1092 joined #gluster
02:20 poornimag joined #gluster
02:43 AdStar Thanks for checking Javezim, my issue is my boot machine is only a 20GB drive. so it fills up pretty quick, I'm sure it's just due to the heal, but still very annoying
02:50 aspandey joined #gluster
02:52 DV_ joined #gluster
03:02 Gambit15 joined #gluster
03:05 ZachLanich Hey guys, is it possible to change a 3-node Replicated (replica 3) setup into a 4-node Distributed-Replicated (replica 2) setup?
03:14 magrawal joined #gluster
03:44 unstence joined #gluster
03:50 kramdoss_ joined #gluster
03:59 atinm joined #gluster
03:59 itisravi joined #gluster
04:29 shubhendu joined #gluster
04:30 poornimag joined #gluster
04:30 delhage joined #gluster
04:34 nbalacha joined #gluster
05:10 msvbhat joined #gluster
05:10 ashiq joined #gluster
05:13 kdhananjay joined #gluster
05:13 skoduri joined #gluster
05:13 Philambdo joined #gluster
05:13 nbalacha joined #gluster
05:16 Manikandan joined #gluster
05:19 ndarshan joined #gluster
05:28 aspandey joined #gluster
05:30 RameshN joined #gluster
05:32 karthik_ joined #gluster
05:35 rafi joined #gluster
05:36 anil joined #gluster
05:37 jkroon joined #gluster
05:37 ankitraj joined #gluster
05:38 ankitraj joined #gluster
05:38 Muthu_ joined #gluster
05:39 rafi joined #gluster
05:41 ppai joined #gluster
05:43 atalur joined #gluster
05:43 ramky joined #gluster
05:44 msvbhat joined #gluster
05:46 kotreshhr joined #gluster
05:46 Bhaskarakiran joined #gluster
05:47 mhulsman joined #gluster
05:50 hackman joined #gluster
05:59 hgowtham joined #gluster
06:01 aravindavk joined #gluster
06:03 satya4ever joined #gluster
06:04 msvbhat joined #gluster
06:17 sanoj joined #gluster
06:19 squizzi joined #gluster
06:20 kshlm joined #gluster
06:23 hchiramm joined #gluster
06:26 hackman joined #gluster
06:29 jtux joined #gluster
06:30 devyani7 joined #gluster
06:31 karnan joined #gluster
06:32 msvbhat joined #gluster
06:39 [diablo] joined #gluster
06:41 hchiramm joined #gluster
06:46 msvbhat joined #gluster
06:52 poornimag joined #gluster
06:56 noobs joined #gluster
06:57 msvbhat joined #gluster
06:59 shdeng_ joined #gluster
07:04 shiro123 joined #gluster
07:04 kdhananjay joined #gluster
07:04 jiffin joined #gluster
07:07 shiro123 hi, i expanded a distributed volume with a new brick and rebalanced, but the clients don't see the size change, is that normal?
07:07 shiro123 running gluster 3.7 and fuse client
07:10 mhulsman joined #gluster
07:14 devyani7 joined #gluster
07:20 Philambdo joined #gluster
07:24 jkroon joined #gluster
07:24 derjohn_mobi joined #gluster
07:29 jri joined #gluster
07:30 poornimag joined #gluster
07:34 fsimonce joined #gluster
07:37 shdeng joined #gluster
07:43 MikeLupe joined #gluster
08:01 nbalacha shiro123, why do you say the clients dont see the change?
08:02 shiro123 nbalacha: when i do a df -h the size displayed is still the size of the first brick
08:03 nbalacha shiro123, how many bricks in the volume?
08:03 shiro123 2
08:03 shiro123 i created with the first and added the second one later
08:04 nbalacha shiro123, when you create files, do they go to the second brick as well?
08:04 nbalacha shiro123, can you provide the gluster volume info
08:04 shiro123 nbalacha: let me check
08:13 shiro123 nbalacha: i created a 2 GB file, but it seems to go on the first brick
08:13 nbalacha shiro123, can you try touching several files?
08:13 nbalacha shiro123, some of them should go to the second brick
08:14 shiro123 ok, some files create an input/output error, others don't
08:16 nbalacha shiro123, did the rebalance complete successfully?
08:17 mbukatov joined #gluster
08:20 shiro123 nbalacha: the fix layout part was already successful, the data migration is still running
08:22 shiro123 nbalacha: is the data migration part necessary by the way? if i add more data shouldn't it be pushed onto the empty brick?
08:23 nbalacha shiro123, yes, some new data will be on the new brick
08:23 nbalacha the rebalance is there to distribute the data according to the new hash ranges
08:24 nbalacha some new files will still go to the first brick
08:25 shiro123 nbalacha: ok, so as a consequence i need to redistribute the data again after adding each brick
08:26 nbalacha shiro123, rebalance will redistribute the data
08:26 shiro123 i thought of it more as appending the second brick
08:26 nbalacha shiro123, it is not mandatory but if you dont you will probably have issues where the original bricks have too many files
08:27 nbalacha and you would probably end up with linkto files on the new brick for those files which are not on their hashed brick
08:27 nbalacha shiro123, files will still go to both bricks - it is not that all new files will only go to the newly added brick
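
For reference, the two flavours of rebalance being discussed (volume name is hypothetical):

    # update only the directory hash ranges, no data is moved
    gluster volume rebalance myvol fix-layout start

    # full rebalance: fix the layout and also migrate existing files
    gluster volume rebalance myvol start

    # check progress
    gluster volume rebalance myvol status
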
08:28 Bhaskarakiran joined #gluster
08:28 nbalacha shiro123, that still does not explain the df  issue though
08:30 shiro123 nbalacha: right, so after i did the fix layout it should usually show in df -h? or do i need to remount the share from a client?
08:30 nbalacha shiro123, you should not need to remount
08:30 shiro123 the io-error worries me a bit too
08:31 nbalacha shiro123, but the EIO error indicates that the hashed subvol was not found
08:31 nbalacha shiro123, can you send me the fuse client log file
08:32 shiro123 nbalacha: ok, i try to locate it
08:32 nbalacha shiro123, should be in /var/log/glusterfs
08:32 Slashman joined #gluster
08:33 andrey_ joined #gluster
08:34 rastar joined #gluster
08:38 deniszh joined #gluster
08:39 [diablo] joined #gluster
08:39 shiro123 nbalacha: found the problem, the client could not resolve the name of the second brick over dns
08:39 shiro123 now its working
08:39 nbalacha shiro123, good :)
08:39 shiro123 nbalacha: thanks for the help and insight so far :)
08:39 nbalacha shiro123, happy to help
08:55 berkayunal joined #gluster
09:02 bunal joined #gluster
09:06 bunal_ joined #gluster
09:06 bunal left #gluster
09:08 Sebbo2 joined #gluster
09:09 kaushal_ joined #gluster
09:11 ira_ joined #gluster
09:12 msvbhat joined #gluster
09:13 bunal_ Hi guys,
09:13 bunal_ I have a (GlusterFS Single Server NFS Style) gluster server. And i have 1 client connecting to it. So basically 1 server and 1 client (the gluster server is a single one, so no replication). This setup was working all ok until now. In the client i have a mounted volume and a folder under it. Example: storage-pool/site/content. When i try to ls this directory in the client I get Input/Output error. If i delete some files from the gluster server for this
09:13 bunal_ folder and get back to the client again, ls for the folder works. So if some files exist in that folder ls is broken and i am getting Input/Output error. Hope i could explain myself? Are there any recommendations? Any help would be appreciated much. Thanks
09:19 karnan joined #gluster
09:21 karnan_ joined #gluster
09:22 ppai joined #gluster
09:23 kramdoss_ joined #gluster
09:31 arcolife joined #gluster
09:43 kdhananjay joined #gluster
09:48 Philambdo joined #gluster
09:52 Klas just curious, why use glusterfs single-node? I can't think of any reason to do it.
09:52 Klas (And, sorry, can't help you)
09:52 kovshenin joined #gluster
09:54 cloph Klas one use is if you  want to use it just  for geo-replication..
09:56 ndevos bunal_: it's rather difficult to say something about that with so little detail.. did you check /var/log/glusterfs/nfs.log for hints?
09:57 Klas cloph: ah, of course
09:58 Klas hmm, actually quite a nice idea, would've had use for that in my last job
09:58 kshlm joined #gluster
09:58 rwheeler joined #gluster
09:59 bunal_ The reason for using it in this form is the permissions. I could use NFS directly but in that case i'd need to match uids and gids for the permissions to work
09:59 Klas ah, smart
10:00 bunal_ So gluster solves the pain but here came the new pain ;)
10:02 Klas just curious, have you worked directly with the system on the server when adding files?
10:02 Klas or have you done the correct thing and only worked on it while mounted?
10:04 bunal_ ndevos in the client i got this in the log file: 0-volumeron-client-0: remote operation failed: No such file or directory
10:05 bunal_ But if delete some files from the glusterserver for that folder ls starts to work again
10:05 Klas this is why I am asking that question, are you adding the files directly on the server, without mounting it first?
10:05 hackman joined #gluster
10:06 Klas cause that WILL create problems
10:06 ndevos bunal_: you should create/delete files on the brick directly, always use a glusterfs-client (nfs-server or fuse-mount)
10:06 ndevos s/should/should NEVER/
10:06 glusterbot What ndevos meant to say was: bunal_: you should NEVER create/delete files on the brick directly, always use a glusterfs-client (nfs-server or fuse-mount)
10:07 bunal_ The mount is used by the CMS and uploads are only done via the CMS on the client server (through the mount)
10:07 bunal_ There are some other developers but...
10:08 bunal_ So if someone created a file on the glusterserver storage folder it may be the problem?
10:08 Sebbo2 If I delete a volume, does it only delete the volume or also the data on the different disks (bricks)?
10:08 hackman joined #gluster
10:09 bunal_ So adding files directly on the glusterserver storage can cause this issue?
10:09 Klas yes
10:09 ndevos bunal_: gluster will not be able to track new/deleted/modified files when things are done behind its back, things will go wrong then
10:10 bunal_ any ways to heal it?
10:10 hackman joined #gluster
10:10 bunal_ Deleting the folder on the glusterserver and recreating the folder from the client would help?
10:11 ppai joined #gluster
10:11 Klas my guess is that it's easiest to recreate the volume and readd the data
10:11 Klas through a client
10:11 Klas you've basically created a corrupted database
10:11 bunal_ Let me try that.
10:12 bunal_ Thanks klas and ndevos
10:12 Klas readding data means the files, not the metadata
10:12 Klas and do it from a client
10:12 Sebbo2 bunal_: You could mount the volume on your glusterfs server and upload / move the data into the mounted volume. :)
10:12 Klas won't it have the same read/write issues as the other client?
10:12 Gambit15 joined #gluster
10:13 Klas since it's in an inconsistent state
10:15 Sebbo2 I mean, instead of uploading files on the mount point of a disk, I would mount the volume and upload the data on its mount point. This is a supported solution to upload data directly on a glusterfs server. Sure, if the volume / data is already inconsistent, you need to repair that volume first.
10:16 Klas ah, yes, of course
10:16 Klas that's what I meant as well =)
10:16 Sebbo2 ^^
10:17 Klas and it seems easier to create a new volume with the correct data than to repair, to me at least (newbie still though)
10:18 Sebbo2 I'm also a newbie. Working with GlusterFS since last Saturday :D
10:19 Klas hehe
10:19 bunal_ Well i am the newbie also
10:20 Klas I initially tried to modify it on the server and noticed it didn't replicate, then I understood why when reading.
10:20 bunal_ So i'll download the files from the server, destroy the volume, create the volume, mount the volume on the glusterserver itself. Upload the files and hope for the best ? :)
10:21 bunal_ sebbo2 thanks for the mount-it-on-the-server-itself trick
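
A sketch of that approach, assuming the volume is called storage-pool (the names here are guesses based on the paths mentioned earlier):

    mkdir -p /mnt/storage-pool
    mount -t glusterfs localhost:/storage-pool /mnt/storage-pool
    # copy the recovered files back in through the client mount, never onto the brick directly
    cp -a /path/to/saved/files/. /mnt/storage-pool/site/content/
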
10:22 Sebbo2 No problem. If you're using Ubuntu, is here a documentation for auto mounts: https://github.com/gluster/glusterfs/pull/47
10:22 glusterbot Title: Workaround script for auto mounts after bootup on Ubuntu by Sebi94nbg · Pull Request #47 · gluster/glusterfs · GitHub (at github.com)
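
The plain fstab entry that the linked workaround script wraps would look roughly like this; on some Ubuntu releases _netdev alone may not be enough to order the mount after glusterd starts, which appears to be why the script exists:

    localhost:/storage-pool  /mnt/storage-pool  glusterfs  defaults,_netdev  0  0
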
10:23 hackman joined #gluster
10:26 rastar joined #gluster
10:26 bfoster joined #gluster
10:27 archit_ joined #gluster
10:40 msvbhat joined #gluster
10:44 Gambit15 joined #gluster
10:57 hagarth joined #gluster
11:02 atalur joined #gluster
11:05 rafi1 joined #gluster
11:06 jtux joined #gluster
11:11 B21956 joined #gluster
11:17 poornimag joined #gluster
11:22 chirino joined #gluster
11:23 B21956 joined #gluster
11:23 d0nn1e joined #gluster
11:37 skoduri joined #gluster
11:43 R0ok_ joined #gluster
11:51 Sebbo2 Where can I find a list of available volume options with their meaning?
11:53 itisravi Sebbo2: gluster volume set help
11:53 bagualistico joined #gluster
11:53 Sebbo2 itisravi: Ah, cool. Thanks! I've also found this: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/ :)
11:53 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
11:54 rafi REMINDER: Gluster Community Meeting in #gluster-meeting in ~ 5 minutes from now
11:55 Gambit15 joined #gluster
11:56 Sebbo2 rafi: What are the topics for this meeting?
11:56 Klas agenda seems to exist in topic of channel
11:56 Sebbo2 Ok, great
11:57 rafi Klas, Sebbo2: that is right
11:57 rafi Klas++
11:57 glusterbot rafi: Klas's karma is now 1
11:58 Klas oh, nice =)
11:59 jtux left #gluster
11:59 rafi Sebbo2: https://www.gluster.org/pipermail/gluster-users/2016-August/028061.html
11:59 glusterbot Title: [Gluster-users] REMINDER: Gluster Community meeting (Wednesday 17th Aug 2016) (at www.gluster.org)
12:01 julim joined #gluster
12:01 Sebbo2 Thanks
12:06 raghug joined #gluster
12:09 hackman joined #gluster
12:10 Sebbo2 Do I need to copy and paste the /etc/glusterfs/glusterd.vol file to every glusterfs server which provides the same volume(s), or is there a way to automate it, by using a gluster command for example?
12:10 The_Ball joined #gluster
12:10 dlambrig joined #gluster
12:11 kkeithley Sebbo2: glusterd does that for you when you create (or start) the volume.
12:11 hackman joined #gluster
12:11 The_Ball Hi guys, I keep losing "client quorum" when I shut down one of three servers. It's an oVirt setup with this volume config: http://paste.fedoraproject.org/409559/14358751
12:11 glusterbot Title: #409559 Fedora Project Pastebin (at paste.fedoraproject.org)
12:11 Sebbo2 kkeithley: I've created and started such a volume, but the file looks like the default one on both of the other servers
12:12 The_Ball Ignore the cluster.quorum-count = 1 as the quorum-type is auto
12:12 The_Ball I was just testing. The logs are complaining about client-quorum being lost, not server-quorum if that matters
12:14 hackman joined #gluster
12:14 Klas Sebbo2: how did you create the volume?
12:14 Klas you say which peers should host the volume in the create command
12:15 Sebbo2 Klas: By using the command "volume create" and then I've added the volume to the /etc/glusterfs/glusterd.vol file. I've stopped and started the volume after that.
12:17 Sebbo2 The command "gluster pool list" returns all three hosts with the state "connected".
12:17 Klas gluster vol info?
12:18 Sebbo2 It's a distributed and started volume with three bricks on three servers
12:18 Klas does it show several bricks?
12:18 Klas ok then!
12:18 Klas just checking the simplest things ;)
12:19 Sebbo2 Or do I explicitly need to tell the glusterfs-server service to use the glusterd.vol?
12:20 kdhananjay1 joined #gluster
12:20 kkeithley huh? you don't add volumes to  /etc/glusterfs/glusterd.vol file
12:20 kkeithley you're doing something very wrong
12:21 kkeithley what linux dist are you using? Are you building from source or installing packages?
12:21 Sebbo2 Huh, not? I thought... :(
12:21 hackman joined #gluster
12:21 Sebbo2 Ubuntu 16.04 LTS
12:21 Sebbo2 Installed packages
12:22 DV_ joined #gluster
12:22 johnmilton joined #gluster
12:22 nishanth joined #gluster
12:22 hackman joined #gluster
12:23 kkeithley `gluster volume create foo host1:/path-to-brick host2:/path-to-brick` should create its own vol file in /var/lib/glusterd/vols/foo/foo.tcp-fuse.vol
12:23 kkeithley on both hosts
12:24 noobs joined #gluster
12:24 hackman joined #gluster
12:26 Sebbo2 Ahhhhh, this is the vol file which will be copied. Yeah, this does exist on all servers. And what is /etc/glusterfs/glusterd.vol used for?
12:29 ppai joined #gluster
12:30 kdhananjay joined #gluster
12:31 Muthu_ joined #gluster
12:32 kshlm Sebbo2, That's basically the config file for glusterd
12:32 hackman joined #gluster
12:33 Sebbo2 kshlm: Ok, is it useful for anything else? Can I add other stuff to improve glusterd?
12:33 aspandey joined #gluster
12:33 kshlm You generally don't mod that file.
12:34 Sebbo2 kshlm: Ok, good to know. I just want to understand GlusterFS. Thanks for the information! :)
12:34 hackman joined #gluster
12:35 rwheeler joined #gluster
12:37 hackman joined #gluster
12:39 hackman joined #gluster
12:40 johnmilton joined #gluster
12:41 hackman joined #gluster
12:42 hackman joined #gluster
12:42 rastar joined #gluster
12:44 hackman joined #gluster
12:45 hackman joined #gluster
12:49 hackman joined #gluster
12:52 unclemarc joined #gluster
13:02 kdhananjay joined #gluster
13:10 ghollies joined #gluster
13:11 Sebbo2 kshlm: Great blog post, but could I also use wildcard SSL certificates? https://kshlm.in/post/network-encryption-in-glusterfs/
13:11 glusterbot Title: Setting up network encryption in GlusterFS | kshlm's blog (at kshlm.in)
13:12 kshlm Sebbo2, I'm not sure, but I don't think wildcard certs would work.
13:15 Sebbo2 Mhmm, ok. I believe, I'll leave it without encryption, because access is only granted within the LAN.
13:18 jtux joined #gluster
13:18 ghollies I'm running into an issue with file lock commands (flock for example). It happens when a brick is rejoining a replica 3 volume after being restarted. There is a period of time where a lock command issued on a file that was created while the brick was down will fail with "No such file or directory", even though the other two nodes have the file
13:20 nishanth joined #gluster
13:22 ws2k3 joined #gluster
13:23 archit_ joined #gluster
13:23 ghollies while the brick is down, everything is fine, and a couple seconds after it's up, it's fine. It seems to pass the lock command down to the brick without the file during that re-connection window
13:26 rafi1 joined #gluster
13:36 hagarth joined #gluster
13:36 nishanth joined #gluster
13:59 [diablo] joined #gluster
14:08 msvbhat joined #gluster
14:10 MikeLupe joined #gluster
14:15 rafa_ joined #gluster
14:16 DV_ joined #gluster
14:16 rafa_ Hello everyone. Quick question. Is version 3.8 already stable?
14:19 ndevos rafa_: it should be
14:20 ndevos rafa_: 3.8.2 has been released last week, if you hit problems, you can file a bug or email the users list
14:20 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:20 ndevos or, well, ask about it here :)
14:21 kshlm joined #gluster
14:22 plarsen joined #gluster
14:23 rafa_ I see, thanks a lot  :)
14:26 cloph what is the upgrade path like? can you mix versions or do you have to  stop everything for update?
14:26 squizzi joined #gluster
14:33 derjohn_mobi joined #gluster
14:34 post-factum cloph: rolling upgrade works
14:34 cloph nice
14:40 hchiramm joined #gluster
14:42 poornimag joined #gluster
14:45 ghollies joined #gluster
14:59 hagarth joined #gluster
15:26 hackman joined #gluster
15:31 wushudoin joined #gluster
15:39 congpine joined #gluster
15:40 congpine hi can anyone please quickly explain what really happens if I run "replace-brick" with/without commit force for a distribute-replicate volume?
15:40 congpine our bricks pair are up and running fine, I'd like to upgrade the size.
15:41 congpine i.e: br01 / br02 . if I replace br01 with br03, will it transfer data from br01 -> br03, or will data be synced from br02--> br03 ??
15:41 glusterbot congpine: br02's karma is now -1
15:46 JoeJulian congpine: I think even the developers don't know the answer to that. Originally, it was intended that the data would transfer from 1 to 3 while still maintaining the replication from 2 to both 1 and 3. Apparently that doesn't work so they decided it would be safer to just deprecate that and have you commit force, leaving the replica count dangerously low and making a new replica from 2 to 3.
15:46 * JoeJulian grumbles.
15:47 congpine ouch !!
15:48 om joined #gluster
15:49 JoeJulian If you only have 2 bricks, just use "add-brick replica 3". When complete, "remove-brick replica 2"
15:49 congpine so you recommend to increase the replica count from 2 to 3.. adding br03. after the sync change it from 3 to 2 again ?
15:49 JoeJulian It's a suggestion.
15:51 congpine right, so that leads to another question: if the volume has a replica count of 2 and I increase it to 3 without adding a new distribute subvolume, will it mess up gluster and cause any "fix-layout" or "index/scanning"? i'm running 3.5
15:51 JoeJulian no
15:52 JoeJulian fixing the layout is only for increasing or decreasing dht subvolumes, not replica.
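
JoeJulian's suggestion, sketched with the hosts from the question (the volume name and brick paths are hypothetical):

    # add a third replica brick and let self-heal copy the data over
    gluster volume add-brick myvol replica 3 br03:/bricks/brick1
    gluster volume heal myvol info          # wait until nothing is pending

    # then drop the old brick and go back to replica 2
    gluster volume remove-brick myvol replica 2 br01:/bricks/brick1 force
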
15:57 msvbhat joined #gluster
16:09 ZachLanich joined #gluster
16:23 om joined #gluster
16:31 kramdoss_ joined #gluster
16:35 kpease joined #gluster
16:42 skoduri joined #gluster
17:23 kovshenin joined #gluster
17:23 msvbhat joined #gluster
17:25 DV_ joined #gluster
17:39 derjohn_mobi joined #gluster
17:53 hagarth joined #gluster
18:21 nathwill joined #gluster
18:21 pampan joined #gluster
18:28 pampan Hi guys. I think I have a memory leak in the gluster self-heal daemon. I'm using 3.5.7 and the pool has two nodes. Everything was working fine until last Friday, when I removed a third node from the pool. Now glustershd is leaking memory until it is killed (I guess by glusterd, because I don't see the OOM killer in action). Any ideas of how I can get the self-heal daemon back to normal?
18:32 pampan https://i.imgsafe.org/4ab2499dc7.png <--- here you can see, at the beginning it was normal. Then I removed the third node from the pool and memory usage started growing. All the memory was being used by glustershd. Then the process was killed or gracefully finished, I don't know, and now it's starting to grow again.
18:32 glusterbot pampan: <-'s karma is now -3
18:45 kovsheni_ joined #gluster
18:51 nathwill joined #gluster
18:54 kovshenin joined #gluster
18:59 loadtheacc joined #gluster
19:05 dlambrig left #gluster
19:08 ben453 joined #gluster
19:08 JoeJulian Poor <
19:09 JoeJulian pampan: have you checked the self-heal daemon's log? /var/log/glusterfs/glustershd.log
19:29 mhulsman joined #gluster
19:35 kpease joined #gluster
19:39 kovshenin joined #gluster
19:46 hagarth joined #gluster
19:53 squizzi joined #gluster
20:06 nathwill joined #gluster
20:07 pampan JoeJulian: no error messages there, just informational messages for when the daemon starts
20:07 pampan it prints the Final graph and then there is no more information, until it restarts again and prints the same informational messages
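
One way to see where glustershd's memory is going is a statedump; glusterfs processes write one when sent SIGUSR1, and the dump lands in the statedump directory (typically /var/run/gluster, or wherever server.statedump-path points). Assuming the daemon shows "glustershd" on its command line:

    kill -USR1 $(pgrep -f glustershd)
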
20:09 Gambit15 JoeJulian, the Gluster guide creates separate LVs for the LV pool's data & metadata. Any particular reason?
20:09 pampan now, gluster healing daemon tells me that volume is healed, but I see a lot of files in brick/.glusterfs/indices/xattrop/ of each of the volumes
20:09 pampan one of them has more than 250000
20:09 Gambit15 The RH Storage guide has them on the same LV...
20:21 squizzi joined #gluster
20:37 squizzi__ joined #gluster
20:45 om joined #gluster
20:45 deniszh joined #gluster
21:09 B21956 joined #gluster
21:21 dnunez joined #gluster
21:27 mhulsman joined #gluster
21:43 pampan joined #gluster
21:49 congpine joined #gluster
21:50 congpine thanks JoeJulian. I will do some testings on local VM to increase the replica
21:50 congpine i'm trying to set some options to the volume but I hit with this error:
21:50 congpine volume set: failed: Staging failed on 192.168.10.10. Error: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
21:50 congpine as far as I can see, all clients running 3.5.4, same as server
21:51 congpine I tried to set server.statedump-path to /tmp but it failed with the same error. So i can't really see what is wrong.
21:51 congpine how can I find out which client has an incompatible version?
22:05 ZachLanich joined #gluster
22:38 bluenemo joined #gluster
22:53 AdStar joined #gluster
23:03 plarsen joined #gluster
23:13 om joined #gluster
23:26 ZachLanich Hey everyone, I have some tough outstanding questions regarding a big project I'm working on that involves Gluster. Can some of these talented folks in here give me some advice? I took a lot of time to create a clear breakdown
23:27 ZachLanich of my situation so it's easier for you to wrap your head around what I'm trying to accomplish: https://notehub.org/q25ii
23:27 glusterbot Title: NoteHub GlusterFS Use Case (at notehub.org)
23:29 JoeJulian Personally, with the present gluster version, I'd probably avoid volume-per, especially since you're keeping them in containers, and rather just bind-mount the container to their subdirectory on a client mount.
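
A sketch of the bind-mount approach JoeJulian describes, using LXD syntax (all names here are hypothetical):

    # mount the volume once on the host via the fuse client
    mount -t glusterfs gluster1:/sites /mnt/sites

    # expose only one site's subdirectory inside the container
    lxc config device add site1 webroot disk source=/mnt/sites/site1 path=/var/www/site1
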
23:30 JoeJulian I assume you've already read my blog article on dht misses.
23:30 ZachLanich @JoeJulian I linked to your article in that writeup :)
23:31 JoeJulian There are a few mitigation techniques in the summary.
23:31 ZachLanich I will certainly be taking every bit of advice in that article first off. Would you advise turning lookup-unhashed to "off", or can that cause issues?
23:32 JoeJulian Wordpress is stupid and creates temp files then renames them, so that /might/ cause issues. I'd test it thorougly.
23:32 JoeJulian (and I'd spell thoroughly correctly)
23:33 ZachLanich Would it be possible to symlink parts of the bind-mounted volume on the runtime container's hdd so it avoids writing temp files to the Gluster vol?
23:33 ZachLanich :P
23:33 JoeJulian If you avoid fix-layout upon expansion and, instead, do a full rebalance (if you add distribute subvolumes) I /think/ you'll be ok.
23:34 JoeJulian Sure, if you symlink something, the container will process that symlink (not the host) and it will point to whatever's in the container.
23:35 JoeJulian Very similar to chroot.
23:35 aj__ joined #gluster
23:37 ZachLanich @JoeJulian The docs seem to make "fix-layout" sound like a good thing, can you explain why you'd avoid it so I'm clear on the underlying logic?
23:38 ZachLanich @JoeJulian Re: the chroot thing, I think I can tackle the biggest offenders for temp files with the symlink approach. I'll do my homework on which links to create. It's only when plugins write non-temp data like config outside of the uploads folder that the data has to be migrated to every runtime container most likely. I hate it as much as you do lol.
23:38 JoeJulian When you do fix-layout, you change the directory hash allocations. I do not believe that any files that are in that directory are checked to see if there's no longer a hash match (thus creating the dht link file). So a hashed lookup is done, the file is not found, and we've disabled lookup-unhashed so that's it. The file is not seen.
23:40 ZachLanich @JoeJulian AHH, ok so you're saying I can improve performance by disabling lookup-unhashed for DHT misses, and as long as I avoid fix-layout, I won't end up with broken file locations? Would this have serious performance implications in the long run by not fixing layout after expanding the volume a number of times?
23:41 JoeJulian I won't say "won't", but afaict you "shouldn't". I don't know every detail of that code.
23:41 JoeJulian No, because you would be doing a full rebalance, which fixes the layout and *then* moves files where they should now belong (or creates the dht link)
23:43 ZachLanich @JoeJulian So you're saying do a full rebalance + migrate as per this part of the docs and all works well with lookup-unhashed off? Docs: http://dp.wecreate.com/1gtvo - Cmd: gluster volume rebalance start
23:44 JoeJulian It should, yes.
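
The combination being discussed, as commands (volume name hypothetical; as noted below, test thoroughly before relying on it in production):

    gluster volume set myvol cluster.lookup-unhashed off

    # after adding distribute subvolumes, run a full rebalance rather than fix-layout only
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
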
23:45 ZachLanich @JoeJulian Wow, that makes a lot of sense. You have no idea how much time & potential wormholeing you just saved me lol.
23:47 JoeJulian Be sure and test the hell out of it. I don't have any production record of turning that off.
23:47 JoeJulian And let me know if you find any surprises.
23:47 ZachLanich @JoeJulian I will have to for this project anyways. It's production hosting lol. I'd be in flames if it came crashing down :P
23:47 ZachLanich @JoeJulian So I think I have one more Q regarding containers. Do you foresee any security issues with bind-mounting a subdir of a Gluster vol into a container, using ACLs/gid:uid maps to allow write access to that directory as far as possible access/priv attacks? There's always kernel vulnerabilities too, but I don't think I'm safe from those with containers regardless of whether or not I'm using bind-mount, correct?
23:48 JoeJulian Are you using docker or are you using something better?
23:50 JoeJulian Unless you use a container that utilizes the security enhancements that come with cgroups v2, your security looks something like this: https://twitter.com/joecyberguru/status/728733009067679744
23:50 ZachLanich LXD containers on their own. Docker is just an app-specific container system that runs on the older lxc containers. LXD is Linux's newest wrapper/daemon around lxc containers with a slew of better functionality, but still uses the lxc tools. It's supposedly very production ready and completely isolated.
23:51 JoeJulian I've been enjoying systemd-nspawn, myself.
23:51 ZachLanich It does use cgroups of some sort. I'm not 100% familiar with cgroups' versions.
23:52 ZachLanich I can allocate resource limits and cpu shares, etc too just like VMs. Only thing that's shared is the kernel.
23:53 ZachLanich I'm asking the lxc folks right now if it uses cgroups v2.
23:55 ZachLanich @JoeJulian I'm certain that LXD containers do everything systemd-nspawn does, if not more. I just did a quick readup on systemd-nspawn and it seems that the functionality is very similar.
23:55 ZachLanich There are also some really cool image & profile functionalities to LXD that make it cake to clone/migrate/provision, etc in seconds.
23:55 ZachLanich @JoeJulian I think it's an apples/oranges situation likely.
23:56 ZachLanich @JoeJulian Assuming it's very similar to systemd-nspawn, do you foresee any security issues with bind-mounting a subdir of a Gluster vol into a container, using ACLs/gid:uid maps to allow write access to that directory as far as possible access/priv attacks?
23:57 misc docker no longer uses lxc, it directly uses namespaces, no?
23:57 ZachLanich misc It may. I'm not super familiar with Docker's advances recently as I don't use it.
23:57 ZachLanich It used to though.
23:58 misc yeah, around 0.9 if I am not wrong
23:58 ZachLanich No surprise.
23:58 misc but so, you plan to map each lxd to a different uid ?
23:59 misc and each wordpress will be running under its own process and under a custom uid ?
23:59 ZachLanich I'm not 100% against Docker, but I'd much prefer provisioning a container, running Saltstack Boostrap inside it, connecting it to my Master and controlling it like a VM. Way more power. Docker cares too much about "what's in the container" for my use case imo.
