Perl 6 - the future is here, just unevenly distributed

IRC log for #gluster, 2017-09-04

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:11 zcourts joined #gluster
00:23 plarsen joined #gluster
00:27 Champi joined #gluster
01:14 gyadav joined #gluster
01:27 gyadav joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 gospod2 joined #gluster
02:39 Guest9038 joined #gluster
02:52 hvisage joined #gluster
03:07 DV joined #gluster
03:38 shyu joined #gluster
03:53 nbalacha joined #gluster
04:07 Guest9038 joined #gluster
04:11 riyas joined #gluster
04:12 gyadav joined #gluster
04:13 poornima joined #gluster
04:17 dominicpg joined #gluster
04:32 rastar joined #gluster
04:44 Shu6h3ndu joined #gluster
04:49 msvbhat joined #gluster
04:57 skumar joined #gluster
04:59 kdhananjay joined #gluster
05:14 atinm joined #gluster
05:21 ndarshan joined #gluster
05:37 hgowtham joined #gluster
05:42 nbalacha joined #gluster
05:49 baojg joined #gluster
05:57 buvanesh_kumar joined #gluster
05:59 rafi joined #gluster
06:03 Saravanakmr joined #gluster
06:10 skoduri joined #gluster
06:10 buvanesh_kumar joined #gluster
06:10 nbalacha joined #gluster
06:19 Prasad joined #gluster
06:20 susant joined #gluster
06:21 Guest9038 joined #gluster
06:23 knishida joined #gluster
06:28 nbalacha_ joined #gluster
06:29 knishida joined #gluster
06:30 knishida Hi, Gluster community.
06:31 knishida We are using gluster with 5TB data, 40 million files and 10 million directories.
06:31 knishida Because too many files, replace-brick operation requires 6 months.
06:33 knishida Then I have a question, can I use rsync command with -X to replace brick?
06:34 JoeJulian I've often wondered about this idea. Why would rsync take less time?
06:35 knishida I did it before with 100 million files. It takes less than 1 week.
06:37 rastar joined #gluster
06:38 owlbot joined #gluster
06:38 JoeJulian Hmm, I wonder what the difference is. It's all just reading and writing over a network. <shrug>
06:39 JoeJulian So it's possible if the xattrs don't change during the copy I suppose. Is this not a replicated volume?
06:41 knishida It is replicated-distributed volume
06:42 knishida https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick
06:42 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
06:42 knishida Im thinking the way based on "Replacing brick in Replicate/Distributed Replicate volumes"
06:43 knishida 1st. kill peer's daemon.   2nd. rsync data from old brick to new brick.
06:44 knishida 3rd. umount old brick and mount new brick as same path.
06:45 knishida 4th. start daemon
06:45 knishida From gluster cluster, it seems like same brick ups.
06:46 nbalacha_ knishida, you will need to preserve the xattrs and hardlinks
06:47 knishida xattrs must be copied by "rsync -X" option
06:47 knishida hardlinks are ...
06:47 knishida no idea at least now
06:48 ahino joined #gluster
06:49 JoeJulian If you're going to do that way, "tar -cC /old_brick_path | tar -xC /new_brick_path" would probably be even faster.
06:49 JoeJulian And with the brick having been killed, it should be safe.
06:50 yyamamoto joined #gluster
06:51 knishida I didn't know tar retains xattr. Thank you!
06:54 knishida Now I think I can do it. Thank you for helpful.
06:55 rafi1 joined #gluster
07:05 ndarshan joined #gluster
07:36 major joined #gluster
07:37 msvbhat joined #gluster
07:38 armyriad joined #gluster
07:40 armyriad joined #gluster
07:41 fsimonce joined #gluster
07:44 ankitr joined #gluster
07:51 ndarshan joined #gluster
07:55 JoeJulian You're welcome, knishida
07:58 john_doe joined #gluster
08:04 aardbolreiziger joined #gluster
08:05 FuzzyVeg joined #gluster
08:14 ivan_rossi joined #gluster
08:31 ndarshan joined #gluster
08:32 aardbolreiziger joined #gluster
08:33 msvbhat joined #gluster
08:40 p7mo joined #gluster
08:40 bEsTiAn Hi (again), is there a way to delegate rights to one specific user, to perform a snapshot, and to consolidate it later, on a specific volume ? (except using sudoers file)
08:58 ndarshan joined #gluster
09:00 _KaszpiR_ joined #gluster
09:04 gyadav joined #gluster
09:05 ankitr joined #gluster
09:14 Guest9038 joined #gluster
09:16 ndarshan joined #gluster
09:16 msvbhat joined #gluster
09:19 JoeJulian bEsTiAn: I'm afraid not.
09:24 apandey joined #gluster
09:46 msvbhat joined #gluster
10:17 msvbhat joined #gluster
10:20 ThHirsch joined #gluster
10:56 bEsTiAn hello, there is no volume rename option on RHEL Gluster ?
11:00 susant joined #gluster
11:01 _KaszpiR_ joined #gluster
11:03 ahino joined #gluster
11:06 Saravanakmr joined #gluster
11:38 WebertRLZ joined #gluster
11:44 baojg joined #gluster
12:01 shyu joined #gluster
12:05 hgowtham joined #gluster
12:16 decayofmind joined #gluster
12:25 davids joined #gluster
12:29 davids The last time I used the 3.10 release of glusterfs with nfs-ganesha 2.4.5. In the folder /usr/libexec/ganesha I had several scripts for dynamically export or unexport a share. But now in glusterfs 3.12 with nfs-ganesha 2.5 this scripts are not available. Is that because of storhaug
12:29 davids ???
12:32 cloph no idea what storhaug is - but what would you need the scripts for? specifying the ganesha exports should be same as before, right?
12:37 davids I had the scripts "create-export-ganesha.sh" and "dbus-send.sh". With this scripts I can dynamically create and export a share without restarting nfs-ganesha
12:37 davids Specifying an export is the same as before
12:40 cloph looking at the repo: https://bugzilla.redhat.com/show_bug.cgi?id=1418417
12:40 glusterbot Bug 1418417: unspecified, unspecified, ---, kkeithle, CLOSED CURRENTRELEASE, packaging: remove glusterfs-ganesha subpackage
12:40 cloph yeah, it is about storehaug (but still no idea what that is :-))
12:41 mohan joined #gluster
12:43 davids I have also no idea what storhaug really is
12:48 davids According to this patch: https://review.gluster.org/#/c/16506/ this scripts were deleted
12:48 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:50 susant joined #gluster
12:53 gyadav joined #gluster
13:01 koolfy joined #gluster
13:10 coredumb joined #gluster
13:10 coredumb Hello
13:10 glusterbot coredumb: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offe
13:10 Guest9038 joined #gluster
13:11 coredumb What steps could I take to optimize a setup to perform decently for hosting Git repos ?
13:16 nbalacha_ joined #gluster
13:22 Shu6h3ndu joined #gluster
13:34 _KaszpiR_ joined #gluster
13:40 legreffier joined #gluster
14:02 arpu joined #gluster
14:05 ankitr joined #gluster
14:24 bEsTiAn joined #gluster
14:30 [fre] Guys...
14:31 [fre] I'm just trying to extend a volume with a new brick on another host. seems to fail again and again.
14:32 [fre] gluster volume add-brick log replica 3 host.prd.lan:/rhgs/brick-log/brick-log
14:32 [fre] I'm going from a 2-replica volume to a 3-replica.
14:32 DV joined #gluster
14:33 [fre] @JoeJulian... does it ring a bell? Reading one of your related blog-entries currently. Any suggestion what to look at?
14:37 _KaszpiR_ joined #gluster
14:39 ron-slc joined #gluster
14:45 saybeano joined #gluster
14:58 susant joined #gluster
15:06 [fre] joined #gluster
15:10 nbalacha_ joined #gluster
15:13 [fre] Failed to get the port number for remote subvolume
15:13 [fre] failed to set the volume [Permission denied]
15:14 [fre] failed to get 'process-uuid' from reply dict [Invalid argument]
15:14 [fre] SETVOLUME on remote-host failed [Permission denied]
15:14 [fre] I don't get it.
15:18 dominicpg joined #gluster
15:22 ankitr joined #gluster
15:25 gunix joined #gluster
15:25 gunix hey, with gluster is the replication sync or async?
15:26 cloph geo-replication is async, but replica is sync
15:28 major joined #gluster
15:29 gunix gluster volume create gv0 replica 2 node01.mydomain.net:/export/sdb1/brick node02.mydomain.net:/export/sdb1/brick
15:29 gunix so this is sync, like galera
15:29 gunix meaning that if data gets written to a host, it is written to both hosts at the same time, and the service that reads from the 2nd node will get the new data
15:33 susant joined #gluster
15:47 gunix yes, that was a question
15:50 siel joined #gluster
15:51 cloph punctuation helps a lot in those cases.
15:51 cloph "Yes" is the answer.
16:12 gunix thank you! took me 18 minutes to figure my punctuation was bad.  :D
16:16 plarsen joined #gluster
16:18 kjackal joined #gluster
16:19 JoeJulian [fre]: "permission denied" eh? perhaps selinux?
16:20 rafi joined #gluster
16:21 decayofmind joined #gluster
16:23 JoeJulian coredumb: git repo /hosting/ shouldn't really require much of anything. A hosted repo is pretty much just the objects (eg. .git/objects ) and should lend itself pretty efficiently to clustered storage.
16:26 ivan_rossi left #gluster
16:50 weller joined #gluster
16:53 weller hi, I have a 2-node gluster with replica 2, and no optimization so far. the machines run centos 7, and xfs file-system on 10 HDDs in raid6 config. Without special tuning: what is the expected io-performance for lets say a few gigabytes of 1MB files?
16:54 weller the nodes are connected with 10G network
16:56 msvbhat joined #gluster
16:58 weller should I have more than 15MByte/s transfer-rates?
17:06 coredumb JoeJulian: for small repos it's much of an issue, but for big ones like kernel clone, it can takes 20x more times than local disk
17:09 coredumb lll
17:18 prasanth joined #gluster
17:28 saltsa joined #gluster
17:31 Shu6h3ndu joined #gluster
17:31 ahino joined #gluster
17:35 msvbhat joined #gluster
17:56 rastar joined #gluster
18:26 armyriad joined #gluster
18:29 Teraii_ joined #gluster
18:29 jocke- joined #gluster
18:31 edong23_ joined #gluster
18:31 yawkat` joined #gluster
18:32 d-fence__ joined #gluster
18:32 nigelb joined #gluster
18:36 anthony25_ joined #gluster
18:37 msvbhat joined #gluster
18:37 MrAbaddon joined #gluster
18:42 victori joined #gluster
18:42 cholcombe joined #gluster
18:43 nadley joined #gluster
18:44 colm4 joined #gluster
18:44 samikshan joined #gluster
18:47 decayofmind joined #gluster
18:49 msvbhat joined #gluster
18:49 armyriad joined #gluster
18:52 Peppard joined #gluster
18:53 decayofmind joined #gluster
18:57 _KaszpiR_ joined #gluster
19:21 weller joined #gluster
20:17 weller are there known performance issues with using gluster in combination with samba? e.g. would one expect to have 10x faster throughput on scp from/to a mounted gluster share, compared to copying the same folder/files (2k files, 10KB each) using a samba share (vfs_gluster module)?
20:37 gunix i created a volume shared across 2 nodes using this tutorial:http://gluster.readthedocs.io/en/latest/Install-Guide/Configure/
20:37 glusterbot Title: Configure - Gluster Docs (at gluster.readthedocs.io)
20:37 gunix however, it is not replicating
20:37 gunix AFAIK replication is called "heal"
20:38 gunix i echoed text to a file, but it doesn't get detected as a change and doesn't get replicated to the other side
20:38 gunix even if i use gluster volume heal httpglustervol full
20:38 gunix what am i doing wrong ?
20:39 cloph heal is the fixing of replication after an error szenario (aka if regular replication could not happen because a brick was down)
20:39 cloph you fail to write how you did setup the volume (whether it is a replicaing one to begin with) for example.
20:40 gunix i first created it and than i started it
20:41 cloph you fail to write how you did setup the volume (whether it is a replicaing one to begin with) for example.
20:42 gunix cloph: https://bpaste.net/show/f40475bcda8a
20:42 glusterbot Title: show at bpaste (at bpaste.net)
20:42 gunix test file is only on server 1
20:42 cloph you must not touch the brick directories manually
20:43 cloph you can only access the volume using a mount of the volume.
20:43 gunix i will give you paste of mounts, wait
20:45 gunix cloph: https://bpaste.net/show/fa6eb57fdfb1
20:45 glusterbot Title: show at bpaste (at bpaste.net)
20:45 gunix also, i removed that test file
20:45 gunix am i missing something? do i need to create some secondary filesystems within the bricks and mount them somehow?
20:46 cloph where in that output is the mount of the gluster volume?
20:49 gunix AFAIK i create xfs on the filesystem, mount the filesystem and that create the volume within it
20:49 gunix am i missing something?
20:49 cloph you are missing that bricks are not the same as the volume.
20:50 cloph volume consists of bricks, but to use the volume, you need to mount the *volume* and must not access the bricks yourself.
20:51 gunix ok so the bricks are these two:
20:51 gunix Brick1: wordpress-01:/mnt/vdb1/brick
20:51 gunix Brick2: wordpress-02:/mnt/vdb1/brick
20:51 gunix the volume is httpglustervol
20:52 cloph and you need to mount httpglustervol and access the volume using whaver mount point you choose for that.
20:52 gunix i think i found it
20:52 gunix HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0
20:52 gunix this is for fstab
20:53 gunix i guess this would happen over network but in my case i will mount them with 127.0.0.1
20:54 cloph that is errorprone, as you cannot reliably tell when local gluster server is ready/can proide the volume info.
20:54 cloph better to specify the other host
20:54 cloph you need network between the two anyway, as otherwise  replication cannot work.
20:55 gunix but that would create a lot of network overhead. if a write request comes to server1, i will write via network on server2, and server2 will replicate the info on server1
20:55 gunix instead of just writing locally and sharing the data only once
20:55 cloph no
20:56 gunix no?
20:56 cloph if you use replica, it will *write* to both servers anyway.
20:56 gunix also, i don't see how this will provide any high availability. i have 2 servers. if one goes down, both don't work, since one is writing/reading from the other
20:57 cloph And whether it reads from server1 or server2 depends on which replies first/unless you *force* it to read locally.
20:57 gunix the point is to create a HA POC for a network server
20:57 cloph But all that is independend from where you get the volume information from (what  server you specify in the mount statement)
20:57 plarsen joined #gluster
20:58 gunix so if on wordpress-02 i add wordpress-01:/volumename to fstab, and wordpress-01 goes down, wordpress-02 will still be able to work?
20:58 cloph with two servers you won't have real HA -- if one goes down you cannot tell whether it is due to network split or actual downtime  - you can configure it so that server1 (the first brick) is enough to be up, but that's it.
20:58 cloph Better to have odd number of peers.
20:58 gunix ok, that is not a problem, i can add another server
20:59 _KaszpiR_ joined #gluster
20:59 gunix i just wanted to see it work first before doing that
20:59 gunix it's a poc anyway
20:59 gunix prod will be on 3 servers, so they have quorum
20:59 gunix i have no ideea how to configure quorum on gluster :D
20:59 gunix ibu i
21:00 gunix i will figure that out, first i want to see it working, with one server going down wihtout impacting the other one
21:00 gunix is this not possible wihtout quorum?
21:01 cloph that is impossible - you need to tell what should happen.
21:01 cloph when both servers are back available again, and both have the same file changed, gluster cannot tell by itself which copy it should keep.
21:02 cloph You have to tell it/configure a default action or handle it with quorum.
21:02 gunix ok, than quorum is easier
21:02 gunix i will add a 3rd server
21:03 gunix but if it has quorum, than why can't i add 127.0.0.1 to localhost? doesn't gluster stop providing data if it doesn't see a 2nd node? AFAIK on galera, if a node isn't in the n+1 group, it stops
21:04 cloph that statement doesn't make sense. 127.0.0.1 is localhost by definition, so you cannot "add 127.0.0.1 to localhost"
21:04 gunix *also, sorry for spamming questions. you know your stuff and you are helping me alot :D if you have something else to do, i will continue my research without spamming you :D
21:04 gunix sorry, i will rephrase
21:04 gunix but if it has quorum, than why can't i add 127.0.0.1 to ***FSTAB***? doesn't gluster stop providing data if it doesn't see a 2nd node? AFAIK on galera, if a node isn't in the n+1 group, it stops
21:04 cloph and of course you can grab volume info from localhost, just don't expect your localhost fstab entry to work due to timing issues (attempt to mount before local gluster server is ready)
21:05 gunix oooh yea that makes sense
21:05 gunix it won't boot
21:05 gunix i have to manually mount after boot
21:06 Teraii joined #gluster
21:15 gunix cloph: https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/876648
21:15 glusterbot Title: Bug #876648 “Unable to mount local glusterfs volume at boot” : Bugs : glusterfs package : Ubuntu (at bugs.launchpad.net)
21:15 gunix this has been reported as a bug
21:15 gunix check the last comment
21:16 gunix the guy created a systemd setting that solves the issue
21:17 cloph didn't work for me, as while the unit is started, that doesn't mean glusterd is ready.
21:17 cloph so from systemd's point of view the unit is launched, but from mount it doesn't work yet.
21:19 gunix :(
21:20 gunix it seems to be easier to just have gluster export a nfs with a virtual ip
21:20 gunix and have wordpress on other nodes
21:20 gunix *btw wordpress is just use as a POC here
21:21 zLuke joined #gluster
21:21 cloph or just specify the other server. then no issues with local timing, as other server should be up and can provide the info about the volume
21:22 cloph btw: https://joejulian.name/blog/dht-misses-are-expensive/
21:22 glusterbot Title: DHT misses are expensive (at joejulian.name)
21:22 zLuke I started a rebalance and it failed on multiple nodes, how can I restart it?
21:29 gunix cloph: so it uses network to provide information about the volume, but the files are accessed from the local folder?
21:30 cloph not necessarily
21:32 zLuke nevermind, I just tried to restart it and it seems to be working, before it kept telling me a rebalance was still in progress
21:33 cloph http://lists.gluster.org/pipermail/gluster-users/2015-June/022321.html
21:33 glusterbot Title: [Gluster-users] reading from local replica? (at lists.gluster.org)
21:40 hvisage joined #gluster
21:41 gunix cloph: so there should be a default choose-local somewhere
21:43 gunix and as far as i understand, this means that data about all nodes gets replicated to all nodes. so if 3 nodes have a volume and node 2 mounted from node 3, and node 3 is dead, node 2 will try to get the data from node 1 or node 2
21:43 gunix i ... guess.
21:44 MrAbaddon joined #gluster
21:52 weller joined #gluster
22:00 gunix cloph: i mounted the volumes as you suggested and replication works. i will add more features with time. thank you for your help
22:12 weller if there is someone using gluster in combination with samba: what is your 'small file' performance? I have read through the web, but unfortunately this seems tough to improve... I would like to know if performance is as expected, or can be improved...
22:21 zcourts joined #gluster
23:14 MadPsy joined #gluster
23:14 MadPsy joined #gluster
23:16 major joined #gluster
23:18 swebb joined #gluster
23:53 plarsen joined #gluster

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary