
IRC log for #gluster, 2016-10-27


All times shown according to UTC.

Time Nick Message
00:26 d0nn1e joined #gluster
00:27 LiftedKilt joined #gluster
00:33 d0nn1e joined #gluster
00:39 d0nn1e joined #gluster
00:47 d0nn1e joined #gluster
00:57 d0nn1e joined #gluster
01:03 d0nn1e joined #gluster
01:09 shdeng joined #gluster
01:10 shdeng joined #gluster
01:10 shdeng joined #gluster
01:12 Champi joined #gluster
01:12 ShwethaHP joined #gluster
01:14 daMaestro joined #gluster
01:14 JoeJulian om2 glusterd (the management daemon) listens on 24007
01:20 shdeng joined #gluster
01:24 shdeng joined #gluster
01:26 shdeng joined #gluster
01:35 bwerthmann joined #gluster
01:37 derjohn_mobi joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 msvbhat joined #gluster
02:48 blu_ joined #gluster
02:51 Gnomethrower joined #gluster
02:51 Lee1092 joined #gluster
02:58 magrawal joined #gluster
02:58 jkroon joined #gluster
03:20 nbalacha joined #gluster
03:28 bwerthmann joined #gluster
03:42 kramdoss_ joined #gluster
03:43 hchiramm joined #gluster
03:55 RameshN joined #gluster
04:01 itisravi joined #gluster
04:10 hgowtham joined #gluster
04:16 jiffin joined #gluster
04:20 atinm joined #gluster
04:25 riyas joined #gluster
04:27 loadtheacc joined #gluster
04:37 shubhendu joined #gluster
04:40 ashiq joined #gluster
04:44 apandey joined #gluster
04:45 ppai joined #gluster
04:48 nishanth joined #gluster
05:01 aravindavk joined #gluster
05:08 gem joined #gluster
05:16 prasanth joined #gluster
05:16 RameshN joined #gluster
05:16 karthik_us joined #gluster
05:16 ndarshan joined #gluster
05:17 Muthu_ joined #gluster
05:21 sanoj joined #gluster
05:28 satya4ever joined #gluster
05:33 Bhaskarakiran joined #gluster
05:40 suliba joined #gluster
05:47 RameshN joined #gluster
05:49 kdhananjay joined #gluster
05:49 hchiramm joined #gluster
05:52 karnan joined #gluster
05:56 arc0 joined #gluster
06:00 shubhendu joined #gluster
06:05 Debloper joined #gluster
06:09 farhorizon joined #gluster
06:14 mhulsman joined #gluster
06:21 sanoj joined #gluster
06:23 rafi joined #gluster
06:23 aravindavk joined #gluster
06:26 poornima_ joined #gluster
06:27 Bhaskarakiran joined #gluster
06:29 shubhendu joined #gluster
06:37 Wizek joined #gluster
06:42 Muthu_ joined #gluster
06:49 Philambdo joined #gluster
06:58 javi404 joined #gluster
07:00 Bhaskarakiran joined #gluster
07:00 ankitraj joined #gluster
07:01 fsimonce joined #gluster
07:03 juo joined #gluster
07:06 jkroon joined #gluster
07:15 msvbhat joined #gluster
07:21 armyriad joined #gluster
07:24 Pupeno joined #gluster
07:33 kxseven joined #gluster
07:50 hackman joined #gluster
07:52 karnan joined #gluster
07:52 haomaiwang joined #gluster
07:53 castel joined #gluster
07:53 castel hi :)
07:54 armyriad joined #gluster
07:57 sandersr joined #gluster
07:57 [diablo] joined #gluster
07:59 ahino joined #gluster
08:11 farhorizon joined #gluster
08:12 devyani7_ joined #gluster
08:20 muneerse joined #gluster
08:25 rafi joined #gluster
08:33 panina joined #gluster
08:40 riyas joined #gluster
08:43 flying joined #gluster
08:44 ahino joined #gluster
08:55 gem joined #gluster
08:57 karthik_us joined #gluster
09:03 Slashman joined #gluster
09:03 ahino joined #gluster
09:04 derjohn_mobi joined #gluster
09:09 panina joined #gluster
09:13 farhorizon joined #gluster
09:29 Saravanakmr joined #gluster
09:40 jiffin joined #gluster
09:47 panina joined #gluster
09:47 titansmc joined #gluster
09:50 jiffin joined #gluster
09:51 titansmc Hi guys, after upgrading from 3.8.0 to 3.8.4 and then to 3.8.5 when I launch glusterd I get /usr/lib64/glusterfs/3.8.0/xlator/mgmt/glusterd.so: cannot open
09:51 titansmc shared object file: No such file or directory
09:51 titansmc And it is pointing to 3.8.0 path.
09:51 titansmc Any idea on how to fix this? Just sent a mail to the list too
09:52 jiffin joined #gluster
09:53 anoopcs titansmc, What does glusterd --version says?
09:53 kshlm titansmc, The version is hardcoded into the binaries when glusterfs is built.
09:54 kshlm So it seems like you still have 3.8.0 versions of /usr/sbin/glusterfsd (glusterd is just a symlink to this).
09:54 titansmc anoopcs: opsld04:/var/log/glusterfs # glusterd --version
09:54 titansmc glusterfs 3.8.5 built on Oct 20 2016 20:18:33
09:54 titansmc Repository revision: git://git.gluster.com/glusterfs.git
09:54 titansmc Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
09:54 titansmc GlusterFS comes with ABSOLUTELY NO WARRANTY.
09:54 titansmc It is licensed to you under your choice of the GNU Lesser
09:54 titansmc General Public License, version 3 or any later version (LGPLv3
09:54 titansmc or later), or the GNU General Public License, version 2 (GPLv2),
09:54 titansmc in all cases as published by the Free Software Foundation.
09:55 titansmc opsld04:/var/log/glusterfs # ls -ltr  /usr/sbin/glusterfsd
09:55 titansmc -rwxr-xr-x 1 root root 106344 Oct 20 22:18 /usr/sbin/glusterfsd
09:57 jiffin joined #gluster
09:58 anoopcs Binary version is correct I guess.
09:59 jiffin joined #gluster
10:01 titansmc yeah, it seems
10:01 jiffin joined #gluster
10:03 jkroon titansmc, glusterd.so is a symlink?  to a 3.8.0 version?
10:04 titansmc :/var/log/glusterfs # l /usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so
10:04 titansmc -rwxr-xr-x 1 root root 1610920 Oct 20 22:18 /usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so
10:04 titansmc it is not a sym link
10:04 titansmc /var/log/glusterfs # l /usr/lib64/glusterfs/
10:04 titansmc total 52
10:04 titansmc drwxr-xr-x  3 root root  4096 Oct 24 12:54 ./
10:04 titansmc drwxr-xr-x 72 root root 40960 Oct 24 12:54 ../
10:04 titansmc drwxr-xr-x  5 root root  4096 Oct 24 12:54 3.8.5/
10:10 jkroon that all looks fine.
10:10 jkroon what exactly is the command run that errors out?
10:14 farhorizon joined #gluster
10:22 karnan joined #gluster
10:28 Pupeno joined #gluster
10:29 titansmc jkroon: :~ # systemctl restart glusterd.service
10:29 titansmc Job for glusterd.service failed. See "systemctl status glusterd.service" and "journalctl -xn" for details.
10:30 jkroon titansmc, that might not restart the bricks.
10:31 kshlm titansmc, Just to be sure let's check the unit file for glusterd. `systemctl cat glusterd.service`.
10:31 jkroon does gluster volume status function?
10:31 kshlm Maybe it's launching some other glusterd.
10:31 titansmc systemctl cat glusterd.service
10:31 titansmc # /usr/lib/systemd/system/glusterd.service
10:31 titansmc [Unit]
10:31 titansmc Description=GlusterFS, a clustered file-system server
10:31 titansmc Requires=rpcbind.service
10:31 titansmc After=network.target rpcbind.service
10:31 titansmc Before=network-online.target
10:31 titansmc [Service]
10:31 titansmc Type=forking
10:31 kshlm jkroon, It wouldn't. Because glusterd itself isn't running.
10:31 titansmc PIDFile=/var/run/glusterd.pid
10:31 titansmc LimitNOFILE=65536
10:31 titansmc Environment="LOG_LEVEL=INFO"
10:31 titansmc EnvironmentFile=-/etc/sysconfig/glusterd
10:31 titansmc ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid  --log-level $LOG_LEVEL $GLUS
10:32 kshlm Okay. It is looking at the correct glusterd.
10:32 kshlm What happens if you run /usr/sbin/glusterd directly?
10:32 titansmc nothing
10:32 jkroon pidof glusterd?
10:33 titansmc opsld04:~ # /usr/sbin/glusterd
10:33 titansmc opsld04:~ # ps aux | grep glus
10:33 titansmc root      8536  0.0  0.0   9240   928 pts/1    S+   12:32   0:00 grep --color=auto glus
10:33 titansmc opsld04:~ # pidof glusterd
10:33 titansmc opsld04:~ #
10:33 kshlm Check the glusterd log.
10:33 kshlm Is it the same error?
10:33 kshlm titansmc, I might have a clue as to what's happening here.
10:34 kshlm Your glusterfs-libs package possibly haven't been updated correctly.
10:34 kshlm s/haven't/hasn't/
10:34 reno joined #gluster
10:34 glusterbot What kshlm meant to say was: Your glusterfs-libs package possibly hasn't been updated correctly.
10:35 reno_G Hello
10:35 glusterbot reno_G: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:35 kshlm The core gluster libs aren't versioned, ie not placed in /usr/lib/glusterfs/<version>/
10:35 kshlm Instead they are directly in /usr/lib/glusterfs
10:35 kshlm These libraries have the functions that perform loading of the translators.
10:36 kshlm So if the libraries are updated incorrectly, the new binaries, ie glusterfsd, will still link to the old libraries and try to load translators.
10:37 reno_G I need help troubleshooting a performance issue in Gluster. I have a transfer rate of 7 Mbit/s when copying some file on a shared gluster volume
10:37 reno_G I suppose this is not normal, correct?
10:37 kshlm But since old libraries are in use, they will look for translators in the old path, which is no longer present.
10:37 kshlm titansmc, Reinstall glusterfs-libs and ensure they are of the correct version.
10:37 p7mo joined #gluster
10:38 kshlm That should solve your problem.
10:38 kshlm If not, let us know on the mail thread you started.
10:38 kshlm I need to leave now.
10:43 titansmc right guys! gluster libs didn't get updated correctly, removed them and upgraded them again. Cheers!!!!
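A quick way to confirm the kind of mismatch kshlm describes is to check which libglusterfs the binaries actually resolve and compare it with the installed library package; a minimal sketch, where the package and tool names assume an RPM-based distro and should be adjusted to taste:
    ldd /usr/sbin/glusterfsd | grep libglusterfs   # which library the binary links against at runtime
    rpm -qa | grep -i glusterfs                    # installed gluster package versions
    yum reinstall glusterfs-libs                   # re-pull the libs if they lag behind (dnf/zypper equivalents apply)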
10:46 overclk joined #gluster
10:58 magrawal joined #gluster
10:59 rafi joined #gluster
11:00 abyss^ Is it possible to have some data on disk, then copy this data to the glusterfs server, and somehow have this data managed by glusterfs? You know, it is faster to copy data to the disk instead of copying via the gluster client.
11:01 abyss^ or is it impossible, because each file for example has extended attrs which gluster manages? And the only way to copy a lot of data from non-gluster to gluster is via the client?
11:01 abyss^ or even better, to turn non-gluster data into gluster data ;)
11:01 abyss^ somehow
11:02 msvbhat joined #gluster
11:04 cloph "you know it i s faster"? shouldn't really be much difference and gluster relies on a filesystem with extended attribute support, s o no idea what your actual question is..
11:07 abyss^ cloph: I'd like to avoid rsync from disk to new glusterfs servers
11:07 abyss^ (I suppose I have to do the rsync via the client only; I can't rsync directly to the glusterfs servers, yes?)
11:08 abyss^ The best option would be not to rsync to the new server but instead just create glusterfs on the existing data
11:08 cloph what  you write doesn't really make much sense to me. I'm still missing your point.
11:09 abyss^ So two questions ;) 1) Is it possible to rsync data to glusterfs not via a client? 2) Is it possible to make glusterfs from existing data?
11:09 abyss^ cloph: yeah, maybe because my english
11:09 abyss^ which is not as you can see very well;)
11:09 cloph you always need some client of some sort, you cannot write to brick directory directly.
11:09 blubberdi joined #gluster
11:10 cloph and no, you cannot make glusterfs from a dir that already contains data.
11:10 sanoj git fetch http://review.gluster.org/glusterfs refs/changes/52/15352/3 && git checkout FETCH_HEAD
11:11 cloph apparently some internal server? Not sure who that comment from sanoj was directed to...
11:11 anoopcs wrong window typing :-)
11:11 sanoj sent by mistake cloph
11:11 blubberdi Hello, is it ok or even possible to use a gluster 3.5 server with a gluster 3.7 client? I couldn't find anything in the documentation regarding this topic.
11:12 hackman joined #gluster
11:13 abyss^ cloph: ok, thank you
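For reference, the usual way to seed existing data is the one cloph implies: create the volume on empty bricks, mount it through a gluster client (even locally on one of the servers), and copy in through that mount. A rough sketch, with VOL, server1 and the paths as placeholders:
    mount -t glusterfs server1:/VOL /mnt/VOL      # FUSE mount via the native client
    rsync -a --progress /old/local/data/ /mnt/VOL/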
11:15 cloph @paste
11:15 glusterbot cloph: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
11:39 arc0 joined #gluster
11:40 derjohn_mob joined #gluster
11:49 arc0life_ joined #gluster
11:54 rafi joined #gluster
11:56 derjohn_mob joined #gluster
11:59 ankitraj joined #gluster
12:10 karthik_us joined #gluster
12:12 arc0 joined #gluster
12:13 johnmilton joined #gluster
12:16 farhorizon joined #gluster
12:32 haomaiwang joined #gluster
12:36 haomaiwang joined #gluster
12:41 guhcampos joined #gluster
12:42 _nixpanic joined #gluster
12:42 _nixpanic joined #gluster
12:45 msvbhat joined #gluster
12:47 ndevos joined #gluster
12:58 kdhananjay joined #gluster
12:59 shyam joined #gluster
12:59 ahferroin7 joined #gluster
13:03 titansmc blubberdi: I am running 2 nodes cluster, one is on 3.8.0 and the other 3.8.5 and both are clients at the same time, it should work
13:03 ira_ joined #gluster
13:05 bfoster joined #gluster
13:06 d0nn1e joined #gluster
13:07 ahferroin7 I've got a 2-node cluster running Gluster 3.7.4 with a replicated volume, and one node is having filesystem issues on its brick filesystem. Any advice on how I might be able to rebuild the FS on that node without having to rebuild the gluster volume from scratch?
13:07 nbalacha joined #gluster
13:09 cloph ahferroin7: replace-brick the one with issues, then after having fixed replace back...
13:10 gem joined #gluster
13:23 abyss^ ahferroin7: or turn off glusterfs on the server where the fs is broken, repair the FS or add a new disk (keep /var/lib/glusterd), get the volume ID via the extended attribute from the brick (directory) on the node whose fs is not broken, and put the same ID on the new brick directory...
13:24 abyss^ start glusterfs and run the heal command
13:24 abyss^ should work as well ;)
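Roughly, the two recovery approaches suggested above look like the sketch below; VOL, HOST and the brick paths are placeholders, and the volume-id value is read from a healthy brick rather than invented:
    # option 1 (cloph): swap in a spare brick, swap back once the filesystem is repaired
    gluster volume replace-brick VOL HOST:/bricks/bad HOST:/bricks/spare commit force
    # option 2 (abyss^): recreate the fs, copy the volume-id xattr over, then heal
    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/good     # run on the healthy node
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /bricks/bad
    gluster volume heal VOL full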
13:28 plarsen joined #gluster
13:30 skylar joined #gluster
13:36 ivan_rossi joined #gluster
13:41 titansmc left #gluster
13:44 hagarth joined #gluster
13:45 derjohn_mob joined #gluster
13:45 msvbhat joined #gluster
13:50 plarsen joined #gluster
13:57 kramdoss_ joined #gluster
13:59 JoeJulian @ping-timeout
13:59 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
14:10 shyam joined #gluster
14:13 kramdoss_ joined #gluster
14:17 titansmc joined #gluster
14:17 titansmc left #gluster
14:17 farhorizon joined #gluster
14:22 johnnyNumber5 joined #gluster
14:27 caleb joined #gluster
14:27 caleb 0cks
14:27 caleb left #gluster
14:28 cshay joined #gluster
14:29 kpease joined #gluster
14:31 aravindavk joined #gluster
14:32 aravindavk joined #gluster
14:34 guhcampos joined #gluster
14:40 f0rpaxe joined #gluster
14:40 squizzi joined #gluster
14:48 farhorizon joined #gluster
14:50 jbrooks joined #gluster
15:11 farhorizon joined #gluster
15:17 ankitraj joined #gluster
15:25 guhcampos joined #gluster
15:28 shyam joined #gluster
15:40 mhulsman joined #gluster
16:00 gem joined #gluster
16:05 ivan_rossi left #gluster
16:12 shaunm joined #gluster
16:13 muneerse joined #gluster
16:43 jiffin joined #gluster
16:50 hackman joined #gluster
16:56 partner dear community.. is there a way to get gluster to see the maximum possible file size when thin-provisioning files? i can see from the filesystem with an ls that a file is say 1 TB even though its content is only say 100 gigs
16:56 partner openstack involved here, gluster as storage backend..
16:57 partner problem is when too many files end up to same brick and their combined size is over the brick size..
16:58 partner we even have empty bricks on the volume but due to hashes and initial size (i suppose) there are files located to same bricks..
16:59 armyriad joined #gluster
17:03 JoeJulian du
17:03 partner the way the volume file is created is "truncate -s <size> <path>" - not sure how gluster reads that and locates the files but obviously one can overcommit a brick
17:03 partner yes, du shows the real size, problem is how to get gluster to see it..
17:04 JoeJulian Gluster doesn't see any of the sizes. It just passes them along through the client to the application.
17:04 partner so there is nothing in gluster to check that if the file will fit a target brick?
17:05 JoeJulian Which isn't _entirely_ true, gluster does check for min-free when creating new files.
17:05 partner i understand the overall available diskspace to be checked, there are min-disk-free variables for gluster but..
17:05 partner so which one does it use for min-disk-free check? plain df  ?
17:06 JoeJulian No, exactly the same as a local filesystem, gluster cannot enforce an application does not overcommit.
17:06 JoeJulian good question. I haven't looked at that part of the source. I would expect so.
17:09 partner https://paste.fedoraproject.org/461968/14775881/
17:09 glusterbot Title: #461968 • Fedora Project Pastebin (at paste.fedoraproject.org)
17:10 JoeJulian You *can* set "preallocate-images=space" in nova.
17:10 partner and there is an option on cinder: glusterfs_sparsed_volumes
17:11 partner guess what that does..
17:11 JoeJulian I think that's only for the thin lvm bd volume.
17:11 partner yes, it will make volumes thick, by writing from the cinder host dd if=/dev/zero of=/your/new/volume...
17:12 JoeJulian Oh, cool.
17:12 partner over the network of course, tried it out.. 30 TB of zeroes was being written towards gluster...
17:12 JoeJulian Yeah. Tell me about it.
17:12 partner but this option you presented is new to me, let me have a look
17:13 JoeJulian I've got one customer with 4 x 20TB images on a 60TB brick.
17:13 partner uhh
17:13 JoeJulian /awkward/
17:14 partner of course i could add more bricks and hope rebalance would move things around (i could even calculate based on hashes how many bricks i need to add) but its live volumes, sized 1-2 TB so it will take loads of time and will affect customer
17:14 JoeJulian I'm just waiting for it to fail. Besides giving me an excellent opportunity to say "I told you so" I might be able to upgrade to 3.7 or 3.8 and enable sharding.
17:17 guhcampos joined #gluster
17:18 partner hmph, that option is for nova to allocate the required space.. darn, i was getting my hopes up already
17:19 partner ie. local instance storage..
17:19 JoeJulian It's the same thing if your nova volumes are on gluster.
17:20 partner nova volume? as in instance "local" storage?
17:20 partner i'm talking about cinder volumes here anyways if it was unclear (my bad in that case, i'm in rust..)
17:20 partner sure, that would then bite and there is no flavor of 2TB anyways, nor will be :)
17:20 JoeJulian :)
17:21 JoeJulian I keep my nova volumes on gluster to support live-migration.
17:21 partner but if i say truncate -s 2TB the gluster doesn't really check if the hash-based location has enough space available?? it just checks "yeah min-disk-free is not active, put it here" ???
17:21 JoeJulian Right
17:22 JoeJulian Essentially it works exactly the same way as any other posix filesystem.
17:23 partner but it has these built-in things for almost similar stuff.. i just feel it should do it since it knows where it is putting the stuff.. everything is abstracted from user's point of view..
17:23 aravindavk joined #gluster
17:23 JoeJulian Oh, you might be able to use quotas. I've never used quota but it might be able to do what you're asking.
17:23 partner so umm, just a thought, could a translator of some sort be to the rescue?
17:24 partner not sure how hard those are to write but to really do the above.. get request in, compare the requested size against target brick, refuse and redirect if not enough space, profit ?
17:25 JoeJulian Not really since that translator would have to keep a copy of the entire directory structure with every file's allocation to be able to do all the math.
17:25 partner i'll take a note on the quota but does not sound it will solve anything..
17:25 JoeJulian With your truncate command, note the df doesn't change.
17:25 partner yeah it doesn't.. :(
17:26 JoeJulian I mean it doesn't change on a xfs/ext4/btrfs/zfs filesystem.
17:26 JoeJulian So, of course, gluster would have no way of knowing that space is allocated that isn't really allocated.
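The point is easy to reproduce on any local filesystem; a small demonstration (the path is arbitrary):
    truncate -s 1T /data/sparse.img   # succeeds even on a much smaller filesystem
    ls -lh /data/sparse.img           # apparent size: 1T
    du -h /data/sparse.img            # actual allocation: ~0
    df -h /data                       # free space unchanged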
17:27 JoeJulian bbiab, gotta pick up my daughter from school. They have half days this week.
17:29 partner just had autumn leave last week
17:29 partner thanks Joe for your comments, as always, much appreciated!
17:29 partner others feel free to join to the puzzle :)
17:31 noiveglusteruser joined #gluster
17:31 noiveglusteruser I need help with gluster client setup
17:32 noiveglusteruser I am using ubuntu and I can't mount volumes on client
17:34 sloop gluster sucks
17:34 ahino joined #gluster
17:38 circ-user-2tw9L joined #gluster
17:43 jiffin noiveglusteruser: can u explain it more
17:43 jiffin ?
17:46 hagarth joined #gluster
17:48 noiveglusteruser jiffin can I sent you a link for stackoverflow question? that has more details?
17:48 jiffin Okay
17:48 noiveglusteruser http://stackoverflow.com/questions/40290363/cannot-mount-glusterfs-client-ubuntu
17:48 glusterbot Title: amazon ec2 - Cannot mount glusterfs client ubuntu - Stack Overflow (at stackoverflow.com)
17:50 jiffin noiveglusteruser: do ur gluster-client and gluster-server have different versions?
17:50 jiffin can u list the packages which you have installed on those machines?
17:50 noiveglusteruser I think so. I used this on server sudo add-apt-repository ppa:gluster/glusterfs-3.8
17:51 noiveglusteruser and on client I just used apt-get install glusterfs-client
17:51 noiveglusteruser so I didn't get to choose client version
17:56 armyriad joined #gluster
17:58 noiveglusteruser jiffin any ideas on what I should do?
18:03 snehring if that ppa provides glusterfs-client as well, use it
18:04 snehring the error messages seem to suggest you're using an older client
18:04 jiffin noiveglusteruser:  snehring is right
18:05 jiffin install the correct version should solve ur issue
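In other words, add the same PPA on the client so apt can pull a matching glusterfs-client; a sketch for Ubuntu, assuming the 3.8 PPA mentioned earlier:
    sudo add-apt-repository ppa:gluster/glusterfs-3.8
    sudo apt-get update
    sudo apt-get install --reinstall glusterfs-client
    glusterfs --version   # should now report 3.8.x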
18:11 jiffin joined #gluster
18:12 elastix joined #gluster
18:13 noiveglusteruser ok I will give it a try. I have added ppa on the client but may be I will uninstall the client altogether and try it again
18:21 farhoriz_ joined #gluster
18:21 noiveglusteruser that was it. Thank you snehring & jiffin
18:21 snehring np
18:22 jiffin noiveglusteruser: :)
18:24 hagarth joined #gluster
18:26 JoeJulian Hey sloop, we like to keep things positive here. Please feel free to ask questions and get help.
18:26 jiffin1 joined #gluster
18:29 Philambdo joined #gluster
18:30 guest_ joined #gluster
18:30 rwheeler joined #gluster
18:44 dgandhi joined #gluster
18:54 Philambdo joined #gluster
19:02 jiffin joined #gluster
19:06 jiffin joined #gluster
19:07 snehring joined #gluster
19:07 JoeJulian partner: I could see a possible volume setting that could always perform an fallocate after an ftruncate that could be added to the posix translator. Theoretically it would be pretty easy.
19:08 JoeJulian Oh wait...
19:08 JoeJulian that wouldn't work either.
19:08 JoeJulian maybe
19:09 JoeJulian One place I read says that it fills it with 0s. That would be bad if there was already data. Another, though, looks like the data would not be wiped. I'm not sure. Worth testing.
19:16 derjohn_mob joined #gluster
19:17 partner hmm
19:18 partner perhaps i don't understand too well but that would simply fill up a brick if there is no check for the available space to fit the file in question..
19:19 partner since the truncate will succeed no matter what
19:20 gem joined #gluster
19:20 partner 1) client: gimme space for this 1TB file foo, 2) gluster: according the filename, your hash is this, go to that brick, 3) brick: sorry, i can't fit that size here, 4) gluster: let me find a spot for you and only if i cannot i will fail this request
19:20 partner overly simply put :)
19:21 panina joined #gluster
19:29 partner there is already the logic to find another place when brick is full as per min-free-disk so 3 should be covered..
19:30 partner i just lack the knowledge on how these things work on the low level so i can only throw higher level ideas (or wishes rather) for the channel to comment on
19:32 partner i know i can copy too big file somewhere and it will complain once the disk fills up, not before that.. was just hoping there would be some signalling "i'm gonna write 1 TB now, stop me now if no go"
19:34 jiffin joined #gluster
19:36 JoeJulian partner: right, but consider this. 200 sparse files exist. There is 5TB free and you want to create a 1TB volume. There's plenty of space so you're allowed to create that file on that brick.
19:36 JoeJulian Those other 200 sparse files, if fully allocated, would take up more than the remaining disk space but they're sparse so the allocation hasn't happened.
19:37 partner exactly my case yes
19:37 partner and i know i'm trying to solve a problem that kind of isn't a problem since sparse files are sparse files and i am using them..
19:37 JoeJulian If, however, there was a volume option to fallocate those files at the time they were created (ftruncated files), you would have each file guaranteed to not exceed the size of the filesystem.
19:37 partner true
19:38 JoeJulian So I'm writing my first c program in the last 5 years to see if it works the way I think it does.
19:39 partner but that would still require an option to check for fallocate to succeed beforehand..??
19:39 partner please excuse me if i talk bs, its a long day already and my head hurts, running a bit slow already here
19:40 partner i tested what happens when brick fills up and its nothing good.. luckily the volumes go to readonly but recovery is a bit difficult since the brick is full and there are no means to shrink the files there.. a single byte to write over the maximum will render everything into ro again..
19:41 partner its even more difficult since i cannot access the systems having those volumes mounted..
19:42 partner but, if the translator, being local, would do a quick check on the request (for fallocate) that would do the trick, yes.. if not.. well its too late, its there already..
19:42 partner too deep, you should have just said NO! :D
19:50 akanksha__ joined #gluster
19:51 JoeJulian partner: My thought is, when a file is created and you want it to be a certain size, your application typically creates the file and truncates it using ftruncate to the size you want. This creates a sparse file.
19:52 snehring is there a way to restrict clients from using the 'gluster --remote-host=<node>' command to manipulate gluster?
19:53 JoeJulian If, however, you had some option to say "fallocate after ftruncate", you already have the size you wanted the file truncated to (which the term in english is illogical if the file is shorter than the truncate you're not really truncating but you're making it larger) so if you immediately called fallocate, if there was room it would allocate it. If there was not it would fail.
19:53 JoeJulian snehring: remote-host has been changed (since I wrote that blog article) to only allow read-only commands
19:53 JoeJulian Still not the security I would like, but much better.
19:54 snehring ah
19:54 partner JoeJulian: yes, in this case the options are truncate -s <size> <file> and for non-sparse it would be simply dd if=/dev/zero of=<file>, latter being very heavy as an operation over the network
19:54 JoeJulian Right, fallocate is not heavy. It allocates from the inode table.
19:55 JoeJulian Would be nice if truncate offered that as an option.
19:55 partner indeed
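The difference is also visible from the shell: util-linux ships a standalone fallocate(1) that really allocates the blocks and fails up front when they are not there, while truncate(1) only sets the apparent size. A sketch against an arbitrary brick path:
    truncate -s 2T /bricks/b1/test.img     # succeeds, allocates nothing
    fallocate -l 2T /bricks/b1/test2.img   # fails with ENOSPC if the filesystem is smaller than 2T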
19:55 MidlandTroy joined #gluster
19:56 JoeJulian But with the volume setting I'm proposing, you would not be able to oversubscribe your storage.
19:57 JoeJulian file a bug
19:57 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:57 partner oversubscribing wasn't exactly a plan ever, it went unnoticed until we noticed some alerts and figured out heck, we're using thin-provisioning (which is default)
19:58 partner and to make things worse we are using small bricks that cannot be extended by any means..
19:58 JoeJulian I hear what you're saying.
19:59 partner its so complex topic one must be a great ninja to know all these tiny things... i'm not :)
19:59 JoeJulian The only solution, at this point, would be to do sharding but that would mean copying your existing images to shard them.
19:59 JoeJulian And you would have to be on a version that supports sharding.
20:00 partner copying a 2 TB file even locally on the storage takes like 12 hours.. we're on 3.8
20:00 JoeJulian Yeah, I didn't say it was a feasible solution... ;)
20:01 partner but we must rebalance anyways to get the stuff spread across new bricks..
20:01 partner of course its live data, attached and in use all the time..
20:02 partner trust me, i'm very open to any/every ideas, we don't have a winner one at hand :)
20:02 JoeJulian And there's a potential problem with rebalance right now with active VMs.
20:02 partner i'm pretty sure many of them go readonly at least..
20:03 JoeJulian bug 1387878
20:03 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1387878 high, unspecified, ---, kdhananj, ASSIGNED , Rebalance after add bricks corrupts files
20:03 partner oh my..
20:03 JoeJulian kdhananj is on it though, so it'll get figured out.
20:04 farhorizon joined #gluster
20:07 partner well, seems i have royal flush at hand.. i still need to fix couple of split-brains as well..
20:07 prth joined #gluster
20:08 JoeJulian Feature requested (bug 1389532)
20:08 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1389532 unspecified, unspecified, ---, bugs, NEW , [RFE] Add a feature to fallocate after ftruncate
20:09 partner JoeJulian++
20:09 glusterbot partner: JoeJulian's karma is now 33
20:11 partner huge thanks once again for your support Joe!
20:16 JoeJulian I'm happy to do what I can to help. :)
20:34 johnmilton joined #gluster
20:36 panina joined #gluster
20:39 partner puuh, getting stressed out here, i know for a fact the space will run out latest by the weekend and i don't have a solution at hand.. i'm tempted to take some manual actions to move files around but not sure how that will end up, at least it would limit the damage to selected file(s)
20:53 partner i am having a very weird plans here.. one is that i start filling up the bricks (on the disk roots, outside glusterfs territory) to make them full enough to get the min-disk-free limits to bite as i wish.. then, checking the hash ranges for bricks and files i could guide some to move around while others would stay in their place..
20:54 partner in my previous life with gluster i had 100+ billion files, now i just have a bunch of large ones..
20:54 partner both worlds seem to have their cons.. :o
20:55 partner any idea if that would work? i have some volumes that are no longer actively written (or not at all) which wouldn't be vulnerable to this high io bug..
21:01 farhoriz_ joined #gluster
21:05 DV_ joined #gluster
21:11 mhulsman joined #gluster
21:14 JoeJulian partner: I like that idea. I would go a step further and create a cron job that shrinks that file if your free space gets below your tolerances.
21:16 partner its a hack but hey, who counts on times like these, main thing is to keep things running..
21:16 JoeJulian exactly
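One possible shape for that hack, as a rough sketch only; the sizes, paths and thresholds are made up, and the reserve file sits on the brick filesystem but outside the brick directory, so gluster and cluster.min-free-disk simply see less free space:
    fallocate -l 200G /srv/brick1-fs/.reserve   # claim space up front
    # emergency valve, e.g. from cron: hand space back when the fs runs low
    avail_gb=$(df -Pk /srv/brick1-fs | awk 'NR==2 {print int($4/1048576)}')
    [ "$avail_gb" -lt 50 ] && truncate -s 100G /srv/brick1-fs/.reserve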
21:31 panina Evenin. I'm trying to find information on some settings, eg the performance.low-prio-threads setting. Is there any documentation on what it does?
21:31 JoeJulian I haven't seen any, no.
21:32 JoeJulian The whole qos aspect is being redone since this model produced some unexpected behavior.
21:33 panina Yeah, I kinda got that. I see documentation on old stuff that no longer seems to be in use.
21:33 panina Is there any information on it in any git's or something like that? Mailing lists?
21:34 panina I got the performance.low-prio-threads from RHEL's gluster+oVirt documentation, but no explanations. I'm trying to find out if it's relevant to my install.
21:36 JoeJulian It came up at Gluster Developer Summit. Let me find the right link for you.
21:42 JoeJulian https://www.youtube.com/watch?v=J0f3wV4627g
21:43 plarsen joined #gluster
21:44 panina JoeJulian thanks!
21:52 Pupeno joined #gluster
21:53 rwheeler joined #gluster
22:10 jay_ joined #gluster
22:14 jay__ joined #gluster
22:26 panina btw, I'm also wondering about the network.remote-dio setting, can't find any docs about that one either.
22:27 panina Anyone know what it is?
22:36 plarsen joined #gluster
22:42 JoeJulian This seems pretty descriptive to me. What do you find confusing?
22:42 JoeJulian Description: If enabled, in open() and creat() calls, O_DIRECT flag will be filtered at the client protocol level so server will still continue to cache the file. This works similar to NFS's behavior of O_DIRECT
22:43 JoeJulian panina: ^
22:44 hackman joined #gluster
22:44 panina Thanks JoeJulian. I'm not that versed in the terminology, and couldn't find any mention of any network settings in the docs I'm looking at.
22:45 panina Apart from ping-timeout settings, that is
22:46 JoeJulian open and creat are file functions, of course. O_DIRECT is a kernel flag that disables the kernel cache when writing to a local filesystem. It theoretically ensures that anything that was sent through write() is actually on a disk.
22:47 JoeJulian What are you attempting to change?
22:47 JoeJulian Are you trying to gain something specific, or are you just tinkering?
22:48 panina I'm looking through the settings that are recommended for RHEL HCI. I'm installing a CentOS setup of three servers, but they are slightly weaker than the RHEL directions.
22:49 panina So I'm trying to double-check if there is anything I need to adjust for my setup.
22:49 panina The RHEL docs for HCI (oVirt + GlusterFS) lists a bunch of settings for the shares, but I'm having trouble finding out what they do, and what is up-to-date.
22:50 panina Like that low-prio-threads thing, I've no idea if that's become outdated. But I'll probably just stick to the RHEL instructions.
22:50 JoeJulian I can't read the RHEL docs.
22:50 panina 1 sec
22:53 panina and off course hastebin is giving me trouble...
22:53 panina http://pastebin.com/E4aKxCyN
22:53 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
22:53 panina will do
22:56 JoeJulian I wouldn't do "cluster.data-self-heal-algorithm full", myself. Not sure why they would recommend that.
22:58 JoeJulian I would also recommend turning off client-side heals and just relying on the self-heal daemon for healing: cluster.{data,metadata,entry}-self-heal off
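Those are all ordinary volume options, so applying (or reverting) them is just a matter of gluster volume set; a sketch with VOL as a placeholder:
    gluster volume set VOL cluster.data-self-heal off
    gluster volume set VOL cluster.metadata-self-heal off
    gluster volume set VOL cluster.entry-self-heal off
    gluster volume get VOL all | grep self-heal   # check what is currently in effect (3.7+)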
23:11 panina sorry for AFK, the baby woke with toothache.
23:12 JoeJulian Awe
23:13 panina Yeah, when I read that I see that it's quite odd. The volume is for VM images, so that would kinda defeat the purpose of the sharding, wouldn't it?
23:13 panina ^ the self-heal-algorithm
23:14 panina And the baby is awake again. AFK for tonight.
23:14 panina Thanks a lot for the help!
23:14 JoeJulian No, it just means that if a heal happens, rather than walking the shard and comparing sections for differences and only copying the blocks that changed, it goes ahead and copies the whole shard.
23:14 JoeJulian Can be more efficient if your network is faster than your cpus.
23:18 partner i'm so tired i cannot even figure out which part of the gfid is matched against the brick hash range?
23:21 partner this makes no sense
23:27 JoeJulian None of it. It's a hash of the filename.
23:27 JoeJulian see https://joejulian.name/blog/dht-misses-are-expensive/
23:27 glusterbot Title: DHT misses are expensive (at joejulian.name)
23:30 partner yes.. i knew that.. i even have "gf_dm_hash.py" on my home dir..
23:30 partner i'm too tired for this stuff.. 2:30AM
23:31 partner thanks and sorry for the noise, bed ->
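For the curious, the layout ranges that filename hashes are matched against live in an extended attribute on each brick's copy of the directory; a hedged example of peeking at them, with the directory path as a placeholder:
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/somedir   # per-brick DHT hash range for that directory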
23:33 JoeJulian Sleep well. Get rested.
23:33 JoeJulian No mistakes.
23:42 prth joined #gluster
