
IRC log for #gluster, 2015-08-20


All times shown according to UTC.

Time Nick Message
00:24 dgandhi joined #gluster
00:45 pdrakeweb joined #gluster
00:51 cliluw joined #gluster
00:56 spcmastertim joined #gluster
01:02 alex3 joined #gluster
01:02 alex3 left #gluster
01:12 DV joined #gluster
01:14 trav408 left #gluster
01:39 Lee1092 joined #gluster
01:45 shyam joined #gluster
01:53 haomaiwa_ joined #gluster
01:54 dlambrig joined #gluster
01:57 dlambrig joined #gluster
02:05 nangthang joined #gluster
02:10 haomaiwa_ joined #gluster
02:10 owlbot joined #gluster
02:16 baojg joined #gluster
02:22 plarsen joined #gluster
02:30 dlambrig joined #gluster
02:42 rafi joined #gluster
02:50 pppp joined #gluster
03:03 chirino joined #gluster
03:09 rafi joined #gluster
03:10 haomaiwa_ joined #gluster
03:19 haomaiw__ joined #gluster
03:22 skoduri joined #gluster
03:22 vmallika joined #gluster
03:34 TheSeven joined #gluster
03:43 rafi joined #gluster
03:44 vimal joined #gluster
03:46 nishanth joined #gluster
03:46 gildub joined #gluster
03:48 sakshi joined #gluster
03:50 neha joined #gluster
03:54 baojg joined #gluster
03:55 rafi joined #gluster
03:56 kanagaraj joined #gluster
03:58 kkeithley1 joined #gluster
04:02 dlambrig joined #gluster
04:02 rafi joined #gluster
04:05 autoditac_ joined #gluster
04:10 haomaiwa_ joined #gluster
04:11 atinm joined #gluster
04:19 rjoseph joined #gluster
04:20 ramteid joined #gluster
04:26 ashiq joined #gluster
04:33 gem joined #gluster
04:37 Manikandan joined #gluster
04:38 baojg joined #gluster
04:42 ppai joined #gluster
04:48 yazhini joined #gluster
04:50 ndarshan joined #gluster
04:55 shubhendu joined #gluster
05:01 DV joined #gluster
05:03 meghanam joined #gluster
05:07 vimal joined #gluster
05:08 neha joined #gluster
05:10 haomaiwa_ joined #gluster
05:12 vimal joined #gluster
05:21 kotreshhr joined #gluster
05:23 hgowtham joined #gluster
05:28 dusmant joined #gluster
05:29 cppking joined #gluster
05:29 cppking hello guys, regarding this post https://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/ , how does samba-vfs-glusterfs work?
05:30 cppking Will it mount the glusterfs locally and then share it through the samba protocol?
05:31 dusmant joined #gluster
05:31 anoopcs cppking, No. Samba uses libgfapi to access gluster volumes. Hope the following link helps: https://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
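For reference, a Samba share backed by the glusterfs VFS plugin is defined roughly like the sketch below; the share name, the volume name "testvol" and the log path are placeholders, not values from this conversation:

    [gluster-testvol]
        vfs objects = glusterfs
        glusterfs:volume = testvol
        glusterfs:logfile = /var/log/samba/glusterfs-testvol.%M.log
        glusterfs:loglevel = 7
        path = /
        read only = no
        kernel share modes = no

With this in place smbd talks to the bricks directly through libgfapi, so nothing needs to be FUSE-mounted on the Samba server.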
05:35 cppking anoopcs: thx a lot, another question: why can't I use libgfapi-python https://github.com/gluster/libgfapi-python on CentOS6.5?
05:35 glusterbot Title: gluster/libgfapi-python · GitHub (at github.com)
05:35 anoopcs cppking, You mean to use python-bindings for api in Samba?
05:37 harish joined #gluster
05:38 cppking anoopcs:  no , use python-bindings to control gluster volume
05:41 anoopcs cppking, GlusterFS API (libgfapi) is not for managing gluster volumes. You can use the api to write applications.
05:42 anoopcs hchiramm_, Correct me if I am wrong ^^ ?
05:43 hchiramm_ http://review.gluster.org/#/admin/projects/libgfapi-python cppking
05:43 glusterbot Title: Gerrit Code Review (at review.gluster.org)
05:44 hchiramm_ the project mentioned in gerrit and github are actually mirrors of the same code..
05:44 rafi joined #gluster
05:44 hchiramm_ well, you can try python bindings to talk with libgfapi
05:44 hchiramm_ most of the apis are ported and available as python apis
05:45 hchiramm_ if anything is missing, please let us know.
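A rough sketch of what using the python bindings looks like; the host "gluster-host", the volume "testvol", and the exact method names (mount/fopen/umount) are assumptions that may differ between libgfapi-python versions:

    from gluster import gfapi

    # connect to the volume over libgfapi (no FUSE mount involved)
    vol = gfapi.Volume('gluster-host', 'testvol')
    vol.mount()

    vol.mkdir('/demo', 0755)
    with vol.fopen('/demo/hello.txt', 'w') as f:
        f.write('written via libgfapi-python')

    vol.umount()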
05:45 TheCthulhu joined #gluster
05:48 skoduri joined #gluster
05:51 anil joined #gluster
05:53 kdhananjay joined #gluster
05:56 cppking Will there be an iSCSI-vfs-glusterfs plugin?
05:57 ndevos cppking: there already is one, see https://apps.fedoraproject.org/packages/iscsi-initiator-utils
05:58 glusterbot Title: Package iscsi-initiator-utils (at apps.fedoraproject.org)
05:59 cppking thx a lot,  you guys are awesome
06:00 haomaiwang joined #gluster
06:00 Bhaskarakiran joined #gluster
06:00 ndevos actually, see https://apps.fedoraproject.org/packages/scsi-target-utils-gluster
06:00 glusterbot Title: Package scsi-target-utils-gluster (at apps.fedoraproject.org)
06:00 ndevos I'm not sure why there are no separate packages for Fedora 22/23, maybe it is all in the main package now?
06:01 ndevos oh, no, it's just that there is no version in testing, ignore that ^ :)
06:02 arcolife joined #gluster
06:02 vmallika joined #gluster
06:03 shubhendu joined #gluster
06:03 raghu joined #gluster
06:04 anoopcs cppking, Whatever the C API provides, those can be done via python bindings too.
06:09 Guest4573 joined #gluster
06:12 cppking anoopcs:  https://bpaste.net/show/ad49134c8e77 Can you help me to check it?
06:12 glusterbot Title: show at bpaste (at bpaste.net)
06:13 haomaiwa_ joined #gluster
06:13 anoopcs cppking, I am not familiar with python bindings.
06:13 anoopcs cppking, ppai, Can you help here?
06:13 anoopcs cppking, ppai can help you out.
06:13 ppai hello
06:13 glusterbot ppai: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:14 cppking hchiramm_:  https://bpaste.net/show/ad49134c8e77 Can you help me ?
06:14 glusterbot Title: show at bpaste (at bpaste.net)
06:14 jwd joined #gluster
06:14 ppai seems like you have an older version of ctypes
06:15 hchiramm_ yep.. looks like that
06:15 ppai you're using python 2.6 which isn't tested yet
06:16 kshlm joined #gluster
06:17 ppai cppking, do you have ctypes installed via pip ? or externally
06:18 cppking ppai:  CentOS6.5 original
06:18 ppai cppking, the ctypes bundled with py26 does seem to have that named arg though: https://docs.python.org/2.6/library/ctypes.html#ctypes.CDLL
06:18 glusterbot Title: 15.15. ctypes — A foreign function library for Python Python v2.6.9 documentation (at docs.python.org)
06:18 cppking I've tried to update ctypes through pip, but it doesn't work
06:19 ppai cppking, ah i see, you shouldn't. Try this: "sudo pip uninstall ctypes"
06:19 skoduri joined #gluster
06:19 poornimag joined #gluster
06:20 cppking ppai:  then how do I upgrade the ctypes module?
06:20 ppai cppking, check if you have multiple versions of ctypes installed: "updatedb; locate ctypes"
06:20 ppai cppking, ctypes is bundled as standard library in python 2.6 itself
06:23 ppai cppking, you could also check the location of loaded ctypes module from python interpreter
06:23 anonymus joined #gluster
06:23 ppai >>> import ctypes
06:23 ppai >>> print ctypes.__file__
06:24 baojg joined #gluster
06:27 cppking '/usr/lib64/python2.6/ctypes/__init__.pyc'
06:28 jtux joined #gluster
06:28 cppking ppai:  now another problem
06:28 cppking https://bpaste.net/show/eba01bda4098
06:28 glusterbot Title: show at bpaste (at bpaste.net)
06:29 ppai cppking, that's a known one, the fix is under review: http://review.gluster.org/#/c/11644/
06:29 glusterbot Title: Gerrit Code Review (at review.gluster.org)
06:29 haomaiwa_ joined #gluster
06:30 ppai cppking, is your initial issue resolved ?
06:30 maveric_amitc_ joined #gluster
06:31 cppking ppai:  thx a lot , I think this problem is solved
06:31 cppking but another question
06:32 cppking https://apps.fedoraproject.org/packages/scsi-target-utils-gluster/  Will I be able to build this plugin successfully on CentOS6.5?
06:32 glusterbot Title: Package scsi-target-utils-gluster (at apps.fedoraproject.org)
06:34 ppai cppking, well that project is new to me as well, so I don't know.
06:35 ppai ndevos, do you know anyone working on (or who has used) scsi-target-utils-gluster that you could point cppking to?
06:35 shubhendu joined #gluster
06:35 ndevos ppai, cppking: dlambrig did the gluster integration there
06:36 ndevos cppking: rebuilding probably only depends on the version of glusterfs-api-devel package that you have
06:36 cppking ppai:  thx a lot
06:37 hchiramm_ ppai++ , reviewing that patch :)
06:37 glusterbot hchiramm_: ppai's karma is now 3
06:37 ppai hchiramm_, I just added you as reviewer :)
06:37 cppking you guys are so kind, I appreciate it
06:38 Lee- joined #gluster
06:38 hchiramm_ cppking++ , please feel free to revert
06:38 glusterbot hchiramm_: cppking's karma is now 1
06:39 ndevos cppking: I dont have a centos6.5 handy, but you can do this:
06:39 ndevos wget https://kojipkgs.fedoraproject.org//packages/scsi-target-utils/1.0.55/3.fc23/src/scsi-target-utils-1.0.55-3.fc23.src.rpm
06:39 ndevos rpmbuild --rebuild scsi-target-utils-1.0.55-3.fc23.src.rpm
06:40 ndevos
06:40 hchiramm_ cppking, maybe you can patch the ssize_t issue with the fix mentioned at http://review.gluster.org/#/c/11644/
06:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
06:40 hchiramm_ that should allow you to go ahead with the python bindings
06:41 ndevos cppking: maybe have a look at the CentOS Storage SIG, the scsi-target-utils package with gluster+ceph support would fit in their plans
06:42 cppking I'll try my best
06:43 neha joined #gluster
06:52 haomaiwa_ joined #gluster
06:57 dusmant joined #gluster
07:00 cppking mount -t glusterfs -o direct-io-mode=true: what's the meaning of this option? Is there a more detailed doc than http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/#mounting-volumes
07:00 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.org)
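For context, the option is passed on a FUSE mount roughly like this (server, volume and mount point are placeholders); broadly, direct-io-mode controls whether the FUSE client bypasses the kernel page cache, which tends to favour consistency over small-file read performance:

    mount -t glusterfs -o direct-io-mode=enable server1:/testvol /mnt/testvol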
07:06 papamoose1 joined #gluster
07:07 papamoose1 joined #gluster
07:08 cppking joined #gluster
07:10 haomaiwa_ joined #gluster
07:10 nangthang joined #gluster
07:15 hchiramm_ cppking: http://www.jamescoyle.net/how-to/559-glusterfs-performance-tuning could help you
07:21 johndescs joined #gluster
07:23 skoduri joined #gluster
07:25 autoditac_ joined #gluster
07:26 johndescs hello, yesterday I tackled a strange problem where "link" files (those with the sticky bit) had the file's permissions + sticky instead of only sticky (mode 1000), leading to directory listings showing duplicates with the exact same name/inode
07:26 johndescs assuming there were no real files with the sticky bit it was easy to fix, but if anyone has an idea about what went wrong I would be happy to know :)
07:27 johndescs running 3.6.4 currently but went through upgrades starting with 3.4
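For anyone hitting something similar, a sketch of how such files can be spotted on a brick (the brick path is a placeholder; this is run against a brick, not the mount): genuine DHT link files carry only the sticky bit (mode 1000) plus a trusted.glusterfs.dht.linkto xattr, so entries with extra permission bits and that xattr are the suspicious ones:

    find /bricks/brick1 -type f -perm -1000 | while read f; do
        # show the dht linkto target (if any) and the full mode for comparison
        getfattr -n trusted.glusterfs.dht.linkto --absolute-names "$f" 2>/dev/null
        stat -c '%a %n' "$f"
    done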
07:28 haomaiw__ joined #gluster
07:33 autoditac__ joined #gluster
07:36 autoditac__ joined #gluster
07:38 fsimonce joined #gluster
07:41 autoditac_ joined #gluster
07:43 poornimag joined #gluster
07:45 johndescs one other thing, I would like to reset quota for a directory since the values are complete nonsense (various problems + upgrade to the quota daemon), but I've no clue how to do this, even playing with quota attrs now that there is a separate daemon
07:47 Alex31 joined #gluster
07:47 ctria joined #gluster
07:47 Alex31 rastar: hi :)
07:53 tanuck joined #gluster
07:53 tanuck left #gluster
07:54 cppking joined #gluster
07:59 Norky joined #gluster
08:10 haomaiwang joined #gluster
08:11 anonymus|2 joined #gluster
08:15 cppking joined #gluster
08:15 calavera joined #gluster
08:20 cppking joined #gluster
08:31 atalur joined #gluster
08:31 karnan joined #gluster
08:37 cppking joined #gluster
08:39 karnan joined #gluster
08:48 deniszh joined #gluster
08:59 spalai joined #gluster
09:01 rastar Alex31: hello
09:02 Alex31 rastar: hello :)
09:02 rastar Alex31: I read your previous messages.. Please don't access/modify data through brick paths.. In your example /mnt/GFSNEWVOL is a brick
09:03 rastar Alex31: gluster filesystem should be accessed/modified only through mount(FUSE/NFS/SMB)
09:04 rastar Alex31: I was interested to know if change smb.conf params like kernel share modes, posix locking and kernel oplocks helped. Also how about io-cache set to off.
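For reference, the share options being asked about would look roughly like this in smb.conf, and the io-cache change is a gluster volume option rather than a Samba one (the share and volume names are placeholders; whether each setting is appropriate depends on the workload, e.g. disabling posix locking is only safe if no other access path relies on those locks):

    [gluster-share]
        kernel share modes = no
        posix locking = no
        kernel oplocks = no

    gluster volume set testvol performance.io-cache off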
09:05 Alex31 rastar: the performance is really better with NFS
09:05 Alex31 rastar: but I have to use  this :posix locking = no
09:05 Alex31 so I mount the brick by NFS and i share it over samba
09:05 Alex31 with this option
09:06 Alex31 now, I can oen a file in 2sec
09:06 jordie joined #gluster
09:06 Alex31 i can open*
09:06 rastar Alex31: but accessing a brick directly will corrupt your glusterfs
09:07 Alex31 rastar: yes, I have tested and It was really a bad idea :D
09:07 rastar Alex31:  :)
09:08 rastar Alex31: anyways, so you say opening a file takes 2 seconds from Fuse and 10 secs using SMB
09:08 rastar Alex31: we should look at reducing that difference
09:09 Alex31 rastar:  first test: I mounted the brick with the command: mount -t glusterfs [...]
09:09 Alex31 I shared this volume via samba
09:09 Alex31 I tried different options but it was slow, even directly on the linux box
09:09 Alex31 different samba options, I mean
09:10 Alex31 Second test:
09:10 haomaiwa_ joined #gluster
09:10 Alex31 I mounted the brick with the command mount -t nfs [.... ]
09:10 Alex31 shared this volume via samba and played with the different samba options
09:10 Alex31 especially posix locking
09:11 Alex31 if posix locking = yes => opening a file from windows takes more or less 8 seconds
09:11 Alex31 if posix locking = no => opening the same file takes 2 sec
09:12 Alex31 for me, NFS is really better for the moment for reducing file access time
09:17 Alex31 rastar: now, I have to work on the small size files
09:18 Alex31 copying a file smaller than 50 KB is veryyyy slow
09:26 rastar Alex31: even after switching off io-cache?
09:27 Alex31 rastar: it seem the same ...
09:29 rastar Alex31: that is very odd
09:29 rastar Alex31: you are copying from gluster to windows local or the opposite way?
09:30 Alex31 from windows to gluster
09:30 Alex31 from gluster to windows is really fast
09:31 rastar Alex31: ok
09:31 rastar Alex31: One single file?
09:31 Alex31 rastar: no, as a test, I copy 5 files (134 KB, 23 KB, 20 KB, 22 KB, 17 KB)....
09:32 Alex31 rastar: the time elapsed for the copy is more or less 7 seconds
09:33 rastar Alex31: as a share option in smb.conf add "case sensitive = yes" and try again after restarting smb
09:33 dusmant joined #gluster
09:33 rastar Alex31: there is a bug in smb-gluster integration which is fixed in glusterfs 3.7
09:35 Alex31 rastar: same time
09:37 rastar Alex31: http://gluster-documentations.readthedocs.org/en/latest/Administrator%20Guide/Monitoring%20Workload/
09:37 glusterbot Title: Monitoring Workload - Gluster Docs (at gluster-documentations.readthedocs.org)
09:37 calavera joined #gluster
09:38 rastar Alex31: ^^ this is a how-to on profiling a gluster volume.. If you could enable profiling, run your same test case, and share the profile data, we can see what is taking so much time.
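In short, the profiling steps from that guide are (volume name is a placeholder):

    gluster volume profile testvol start
    # ... run the test workload, e.g. the 5-file copy ...
    gluster volume profile testvol info
    gluster volume profile testvol stop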
09:40 Alex31 rastar: ok, let's go :)
09:43 Alex31 rastar: ok, i'm ready. you want a test with and without the case sensitive option ?
09:44 rastar Alex31: Yes please :)
09:46 dlambrig joined #gluster
09:47 Alex31 rastar:  this is a copy of 5 small files with case sensitive activated on the samba share : http://fpaste.org/257062/14400639/
09:47 glusterbot Title: #257062 Fedora Project Pastebin (at fpaste.org)
09:49 Alex31 rastar: now I have changed case sensitive to "no", rebooted the server, and copied the 5 small files :
09:50 Alex31 rastar:  http://fpaste.org/257064/06422014/
09:50 glusterbot Title: #257064 Fedora Project Pastebin (at fpaste.org)
09:51 shubhendu joined #gluster
09:51 skoduri joined #gluster
09:54 rastar Alex31: http://fpaste.org/257070/64488144/
09:54 glusterbot Title: #257070 Fedora Project Pastebin (at fpaste.org)
09:55 rastar I copied from one brick each for both the cases
09:56 jwd joined #gluster
09:57 nishanth joined #gluster
09:57 rastar Alex31: one important note: when you create a replica 4, you are splitting your bandwidth by 4. So if your Samba server machine has a 400MB/s connection to a brick machine, your writes will now occur at 100MB/s
10:00 rastar Alex31:  for comparison, can you create a new vol with one single brick. like "gluster vol create GFSVOL2 drbd02:/mnt/GFSVOL2 force" and try the same. No need to profile. Just run your test on GFSVOL2 after configuring Samba.
10:02 elico joined #gluster
10:05 Alex31 rastar: I don't understand why i have this error : volume create: GFSVOL2: failed: Staging failed on drbd02. Error: Host drbd01 not connected
10:06 Alex31 from drbd01, peer status  say all nodes are connected
10:07 Alex31 12:07:08 root@drbd01:/mnt#  gluster volume create GFSVOL5 drbd01:/mnt/GFSVOL5 force
10:07 Alex31 volume create: GFSVOL5: failed: Staging failed on drbd02. Error: Host drbd01 not connected
10:10 haomaiwa_ joined #gluster
10:11 Alex31 rastar: ooo, I was having a problem with one node
10:11 cppking joined #gluster
10:12 kotreshhr joined #gluster
10:17 harish joined #gluster
10:19 autoditac_ joined #gluster
10:20 autoditac__ joined #gluster
10:21 rastar Alex31: could you make it work?
10:27 s19n joined #gluster
10:30 Bhaskarakiran joined #gluster
10:34 baojg joined #gluster
10:37 shyam joined #gluster
10:37 ira joined #gluster
10:41 SeerKan joined #gluster
10:41 SeerKan Hi guys
10:41 SeerKan Can anybody help me understand how the failover process works in the native fuse client ?
10:43 SeerKan If I mount a volume from server1 with server2 as backup, I understand that once server1 is down it will use server2 automatically. But what happens when server1 is back? Will it automatically start using server1 even if it doesn't have the latest data, or keep using server2 until it goes down and then go back to server1?
10:45 kdhananjay joined #gluster
10:48 jcastill1 joined #gluster
10:49 skoduri joined #gluster
10:49 LebedevRI joined #gluster
10:51 rastar Alex31: The profile info you gave shows that the latency on the brick side is very low.. on the order of 42 milliseconds..
10:52 rastar Alex31: it means the client latency is high or the network latency is high.. I will look into it more.
10:53 rastar Alex31: for reference, what is the ping time from samba machine to any brick machine?
10:53 rastar Alex31: I will be afk for a while
10:53 jcastillo joined #gluster
10:59 kanagaraj joined #gluster
11:06 jcastill1 joined #gluster
11:07 meghanam joined #gluster
11:11 dusmant joined #gluster
11:11 gem joined #gluster
11:11 calavera joined #gluster
11:12 jcastillo joined #gluster
11:21 Bhaskarakiran joined #gluster
11:23 JonathanD joined #gluster
11:26 jrm16020 joined #gluster
11:31 jrm16020_ joined #gluster
11:34 elico joined #gluster
11:34 spalai left #gluster
11:36 ashiq joined #gluster
11:38 Alex31 rastar_afk: be back ... for the moment, one of my nodes sees the other members of the cluster as disconnected
11:38 dusmant joined #gluster
11:43 yazhini left #gluster
11:44 jrm16020 joined #gluster
11:49 jrm16020 joined #gluster
11:56 plarsen joined #gluster
11:59 skoduri joined #gluster
12:01 ashiq joined #gluster
12:03 dusmant joined #gluster
12:11 shubhendu joined #gluster
12:15 gildub joined #gluster
12:17 rjoseph joined #gluster
12:17 poornimag joined #gluster
12:18 jtux joined #gluster
12:19 jcastill1 joined #gluster
12:21 atalur joined #gluster
12:22 unclemarc joined #gluster
12:23 skoduri joined #gluster
12:24 jcastillo joined #gluster
12:29 nishanth joined #gluster
12:32 plarsen joined #gluster
12:34 Slashman joined #gluster
12:34 pdrakeweb joined #gluster
12:38 papamoose joined #gluster
12:40 jrm16020 joined #gluster
12:41 ndevos @later tell cppking untested scsi-target-utils with gluster support for el6: https://devos.fedorapeople.org/tmp/scsi-target-utils/
12:41 glusterbot ndevos: The operation succeeded.
12:49 calavera joined #gluster
12:53 kotreshhr left #gluster
12:58 dlambrig joined #gluster
13:00 spalai joined #gluster
13:00 spalai left #gluster
13:02 Alex31 rastar_afk: when you come back ... you're right, the cause of the slow copy was the bandwidth :-/
13:02 Alex31 rastar_afk: not really the bandwidth, but the response time...
13:03 Alex31 rastar_afk: I have a multi-site architecture between France and Tunisia. The bandwidth is good but the response time is not
13:04 marbu joined #gluster
13:05 Alex31 rastar_afk: is there a way to configure the copy in asynchronous mode ?
13:05 volga629 joined #gluster
13:08 doekia joined #gluster
13:11 atalur joined #gluster
13:12 Alex31 rastar_afk: don't worry ...  geo replication seems to be what I want
13:20 pdrakeweb joined #gluster
13:25 dgandhi joined #gluster
13:37 jrm16020 joined #gluster
13:45 shyam joined #gluster
13:46 julim joined #gluster
13:47 B21956 joined #gluster
13:49 Alex31 where can I find the new documentation for geo-replication? in the current doc, there is a link for GFS > 3.4 but it doesn't work
13:49 Alex31 link: http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication
13:50 spcmastertim joined #gluster
13:51 raghu left #gluster
13:52 volga629 joined #gluster
13:53 haomaiwa_ joined #gluster
13:56 dlambrig joined #gluster
13:58 plarsen joined #gluster
14:07 mbukatov joined #gluster
14:10 haomaiwa_ joined #gluster
14:14 shyam joined #gluster
14:16 Lee1092 joined #gluster
14:17 spalai joined #gluster
14:24 drankis joined #gluster
14:31 neofob joined #gluster
14:34 atalur joined #gluster
14:51 mckaymatt joined #gluster
14:52 plarsen joined #gluster
14:52 anonymus joined #gluster
14:54 jcastill1 joined #gluster
14:54 ndevos Alex31: this is more current http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Distributed%20Geo%20Replication/
14:54 glusterbot Title: Distributed Geo Replication - Gluster Docs (at gluster.readthedocs.org)
14:55 mckaymatt joined #gluster
14:55 ndevos Alex31: maybe also http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/
14:55 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.org)
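As a rough outline of what those docs walk through for distributed geo-replication (the master volume, slave host and slave volume names are placeholders, and passwordless root SSH from a master node to the slave is assumed):

    # generate and distribute the pem keys from a master node
    gluster system:: execute gsec_create
    # create, start and monitor the session
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status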
14:58 corretico joined #gluster
14:58 Alex31 ndevos: thanks you !
14:58 hgowtham joined #gluster
14:58 Alex31 the doc looks much newer ;)
15:01 cyberswat joined #gluster
15:04 ndevos Alex31: yes, all the current wiki pages have been moved there, we're still waiting for a redirection from the old docs
15:04 Bhaskarakiran joined #gluster
15:05 ndevos hchiramm_: ^ that really needs some more pushing, could you check the latest progress?
15:06 _Bryan_ joined #gluster
15:08 meghanam joined #gluster
15:09 skoduri joined #gluster
15:10 _maserati joined #gluster
15:10 shyam joined #gluster
15:10 haomaiwa_ joined #gluster
15:11 jcastillo joined #gluster
15:15 spalai joined #gluster
15:19 pdrakewe_ joined #gluster
15:21 pdrakewe_ joined #gluster
15:22 plarsen joined #gluster
15:27 mckaymatt joined #gluster
15:50 shyam joined #gluster
15:54 cholcombe joined #gluster
15:57 jdossey joined #gluster
15:58 jwaibel joined #gluster
16:03 Bhaskarakiran joined #gluster
16:08 bennyturns joined #gluster
16:11 bennyturns joined #gluster
16:21 jbautista- joined #gluster
16:24 plarsen joined #gluster
16:24 anonymus joined #gluster
16:26 Ali_ joined #gluster
16:29 squizzi_ joined #gluster
16:30 s19n left #gluster
16:31 jbautista- joined #gluster
16:32 firemanxbr joined #gluster
16:35 rafi joined #gluster
16:36 rastar_afk joined #gluster
16:36 anil joined #gluster
16:36 hchiramm_ joined #gluster
16:36 sac joined #gluster
16:36 rp_ joined #gluster
16:36 lalatenduM joined #gluster
16:41 Ali_ Hi Community. Maybe I have an error in my gluster network design. Is it possible to have a setup like that: 2 servers with 60TB local disk space, disk replication with Gluster over the interconnect, and the glusterfs client connects to Gluster over the prod interface? (connectivity described here http://pastebin.com/xUXXZ4Ja) Servers connected through 10Gig interfaces, including the direct connects.
16:41 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:42 Ali_ My problem is, when the glusterfs client connects to Server 1 at 10.0.0.10:/test_vol, it receives the hostnames of the direct-connect interface.
16:42 wushudoin| joined #gluster
16:42 5EXABYNNL joined #gluster
16:43 Ali_ http://ur1.ca/nhrio
16:43 glusterbot Title: #257256 Fedora Project Pastebin (at ur1.ca)
16:44 spalai joined #gluster
16:44 PerJ joined #gluster
16:47 wushudoin| joined #gluster
16:47 Ali_ @paste
16:47 glusterbot Ali_: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
16:50 Ali_ Hi Community. Is it possible to have a setup like that: 2 servers with 60TB local disk space, disk replication with Gluster over the interconnect, and the glusterfs client connects to Gluster over the prod interface?
16:51 Ali_ Servers connected through 10Gig interfaces, including the direct connects. My problem is, when the glusterfs client connects to Server 1 at 10.0.0.10:/test_vol, it receives the hostnames of the direct-connect interface.
16:51 Ali_ Connectivity described here http://ur1.ca/nhrio
16:51 glusterbot Title: #257256 Fedora Project Pastebin (at ur1.ca)
16:56 mckaymatt joined #gluster
16:56 jbautista- joined #gluster
16:58 dzany joined #gluster
17:00 dzany Hi all. We are having increased server load, and I would appreciate someone's help. The deepest log message that I have found is: 0-gfsvolume1-posix: mknod on /gluster-storage/moodledata/muc/config.php failed: File exists
17:00 klaas joined #gluster
17:00 firemanxbr_ joined #gluster
17:01 firemanxbr_ joined #gluster
17:02 jbautista- joined #gluster
17:04 shyam joined #gluster
17:08 jbautista- joined #gluster
17:08 Rapture joined #gluster
17:09 wushudoin| joined #gluster
17:11 trav408 joined #gluster
17:12 pdrakeweb joined #gluster
17:14 wushudoin| joined #gluster
17:20 julim joined #gluster
17:44 jrm16020 joined #gluster
17:49 dzany we found the error. Thanks to whoever wanted to respond :)
17:57 justicefries left #gluster
17:57 techsenshi joined #gluster
18:00 shyam1 joined #gluster
18:00 firemanxbr joined #gluster
18:03 KyleG joined #gluster
18:04 KyleG o great #gluster admins, I am but a peer looking for thoughts and experiences on people running gluster in production with datasets 300 TB -> 1 PB in size.
18:04 johndescs_ joined #gluster
18:04 KyleG Considering it for my environment right now, curious as to how much management overhead there is and how stable gluster is nowadays in a real-world production scenario
18:10 shaunm joined #gluster
18:14 Twistedgrim joined #gluster
18:25 pdrakeweb joined #gluster
18:29 ctria joined #gluster
18:35 unclemarc joined #gluster
18:45 jbautista- joined #gluster
18:49 jbautista- joined #gluster
18:54 _maserati joined #gluster
19:03 Twistedgrim joined #gluster
19:17 jwd joined #gluster
19:25 spcmastertim joined #gluster
19:26 muneerse joined #gluster
19:28 JoeJulian KyleG: Sorry for the non-answer, but it depends on workload.
19:30 Peppard joined #gluster
19:30 JoeJulian @later tell Ali_ The clients do the replication so they need to connect to your brick servers as the hostnames are defined in the volume. To achieve the layout you're asking for, you can use split-horizon dns.
19:30 glusterbot JoeJulian: The operation succeeded.
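A minimal sketch of that split-horizon idea using plain /etc/hosts entries (hostnames and addresses are invented for illustration): the peers are probed by hostname, and each network resolves those names to the addresses it should use:

    # on the storage servers: peer hostnames resolve to the direct-connect addresses
    172.16.0.10  gluster1
    172.16.0.11  gluster2

    # on the clients: the same hostnames resolve to the production addresses
    10.0.0.10    gluster1
    10.0.0.11    gluster2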
19:33 leucos joined #gluster
19:47 anoopcs joined #gluster
19:53 ira joined #gluster
19:56 janegil joined #gluster
20:02 wushudoin| joined #gluster
20:10 DV joined #gluster
20:10 cornfed78 joined #gluster
20:11 cornfed78 Hi all
20:11 cornfed78 wondering if anyone can come to the rescue again :)
20:11 cornfed78 I upgraded one  node in my 3-node cluster from 3.6.3 to 3.7.3
20:12 wushudoin| joined #gluster
20:12 cornfed78 no errors in the upgrade or anything, and I followed the instructions for enabling 'server.allow-insecure on'
20:12 cornfed78 however, on the upgraded node, when I do a peer status, the other two nodes are disconnected.
20:12 cornfed78 On the un-upgraded nodes, the peers all show as connected..
20:13 JoeJulian Did you add "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol ?
20:13 JoeJulian Which requires restarting glusterd as well.
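For reference, the two settings involved look roughly like this (volume name is a placeholder; glusterd must be restarted after editing glusterd.vol):

    # /etc/glusterfs/glusterd.vol, on every server
    volume management
        type mgmt/glusterd
        ...                          # existing options stay as they are
        option rpc-auth-allow-insecure on
    end-volume

    # per volume
    gluster volume set testvol server.allow-insecure on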
20:13 badone_ joined #gluster
20:14 cornfed78 http://ur1.ca/nhu4h
20:14 glusterbot Title: #257335 Fedora Project Pastebin (at ur1.ca)
20:14 cornfed78 I did on the upgraded host
20:15 cornfed78 I only read about that one after I started :(
20:15 JoeJulian You need to on the older servers. They're the ones that are going to deny the connections from the unprivileged ports.
20:15 cornfed78 damn
20:15 cornfed78 OK
20:15 JoeJulian You can restart glusterd any time.
20:16 cornfed78 well, I have live VMs on the un-upgraded server
20:16 cornfed78 and those gluster vols aren't replicated :(
20:16 JoeJulian The only effect that might have is if a client is trying to mount from that server at the moment glusterd is down.
20:16 JoeJulian The bricks remain running.
20:16 cornfed78 how would that affect single-brick volumes loaded in ovirt?
20:16 JoeJulian it wouldn't.
20:17 ctria joined #gluster
20:17 cornfed78 i see
20:17 cornfed78 so, existing mounts remain active?
20:17 JoeJulian yep
20:17 JoeJulian @services
20:18 JoeJulian @meh
20:18 glusterbot JoeJulian: I'm not happy about it either
20:18 cornfed78 heh
20:18 cornfed78 OK.. maybe I'll try it after 5 PM :)
20:18 cornfed78 little nervous about it
20:18 JoeJulian glusterd is only the management daemon. glusterfsd is the actual brick servers.
20:19 JoeJulian You can try it elsewhere. Start up a small cluster in some VMs. Create a volume, start it, then stop glusterd.
20:21 cornfed78 gave it a go on a less-important node.. seems to have worked!
20:23 cornfed78 thanks a bunch!
20:23 JoeJulian You're welcome
20:23 cornfed78 hopefully 3.7.3 will fix my sparse image inflation issue..
20:24 jbautista- joined #gluster
20:29 jbautista- joined #gluster
20:46 remmo123 joined #gluster
20:46 remmo123 hey all
20:46 remmo123 any setup a glusterfs for ploop/openvz ?
20:47 remmo123 interested in any feedback on glusterfs -> nfs -> ploop
20:54 johanfi joined #gluster
20:55 johanfi Hi.. I'm looking to expand our use of gluster to include using it as object storage
20:55 johanfi does anyone have a production workload running on gluster?
20:56 johanfi Our primary use case would be the following: central servers running gluster, around gluster (and on the same network) legacy nodes and legacy applications
20:59 johanfi and then we would like to use swift-on-file to access data from gluster, and to write the data from nodes externally, again, coupling them with legacy apps
21:01 johanfi but we don't want the following:
21:01 johanfi 1. Having to use the entire openstack ecosystem
21:01 johanfi 2. re-engineer old legacy
21:05 elico joined #gluster
21:19 togdon joined #gluster
21:32 gletessier joined #gluster
21:37 timotheus1 joined #gluster
21:37 ctria joined #gluster
21:37 primehaxor joined #gluster
21:38 primehaxor hello, I'm trying to mount a brick via nfs, but when I write, the data isn't replicated
21:50 owlbot joined #gluster
21:50 Rapture joined #gluster
22:03 jobewan joined #gluster
22:11 mckaymatt joined #gluster
22:30 Rapture joined #gluster
22:41 johanfi joined #gluster
23:03 jobewan joined #gluster
23:03 * JoeJulian types answers for impatient people who've already left.
23:24 nangthang joined #gluster
23:30 jrm16020 joined #gluster
23:32 jermudgeon The link to the GlusterFS admin guide is wrong at http://www.gluster.org/community/documentation/index.php/Main_Page
23:42 primehaxor joined #gluster
23:43 primehaxor Hello, is it possible to use a replicated brick with nfs? I've mounted the volume via nfs but the data isn't replicated
23:47 JoeJulian jermudgeon: fixed
23:48 JoeJulian primehaxor: no. You can use a replicated volume with nfs, but the brick belongs to glusterfs.
23:48 JoeJulian @nfs
23:48 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
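Spelled out as a command (server, volume and mount point are placeholders), with the pieces glusterbot mentions in place on the server:

    mount -t nfs -o vers=3,tcp server1:/testvol /mnt/testvol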
23:49 primehaxor @JoeJulian using the gluster fuse mount the replication works, but I write at just about 10MB/sec :(
23:49 primehaxor with nfs I write at about 200MB/sec
23:49 JoeJulian Maybe you're doing it wrong.
23:49 jermudgeon JoeJulian: thanks. Also, I’m using splitmount, great tool. Is there a similar tool for parsing the attribs as described here? http://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain/
23:49 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
23:49 JoeJulian So how are you doing this write test?
23:50 primehaxor @JoeJulian I'm running an rsync from a server to the brick
23:50 JoeJulian jermudgeon: no. That page has new features as of 3.7 that will make splitbrain obsolete.
23:51 JoeJulian primehaxor: You're not supposed to write to bricks. Also, rsync is horribly slow for populating a volume. You're better off with a simple cpio.
23:51 JoeJulian If you do need to use rsync, be sure to use --inplace.
23:52 JoeJulian That said, using nfs for populating your volume is certainly valid and may be faster than using fuse.
23:53 JoeJulian Like glusterbot said. Make sure the kernel nfs server is disabled before starting glusterd.
23:53 * jermudgeon updates to 3.7
23:53 JoeJulian Then you're mounting the volume. Mount server:/volume name.
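Concretely, the approaches described look something like this, with the volume NFS- or FUSE-mounted at a placeholder path /mnt/testvol and the data coming from a placeholder /data/source (never a brick path):

    # populate the mounted volume with cpio (pass-through mode)
    cd /data/source && find . -depth -print0 | cpio -0 -pdm /mnt/testvol

    # or, if rsync is required, --inplace avoids the write-to-temp-then-rename pattern
    rsync -a --inplace /data/source/ /mnt/testvol/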
23:55 gildub joined #gluster
23:56 primehaxor @JoeJulian I need to move about 1TB to the volume
23:57 primehaxor if the nfs mount replicates to the other brick
23:57 primehaxor then I can use the fuse client later
23:58 JoeJulian @glossary
23:58 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
23:59 JoeJulian You keep saying brick and I want to make sure you understand that if you write to a brick, you're going to cause problems for your volume.
23:59 primehaxor hmm gotcha
23:59 JoeJulian Gluster has its own nfs server. You can mount the volume via nfs.
23:59 primehaxor ohhh that should be it, when I run showmount with the nfs kernel server enabled I don't see anything
