
IRC log for #gluster, 2014-04-22


All times shown according to UTC.

Time Nick Message
00:08 badone joined #gluster
00:09 tg2 JoeJulian, any reason gluster couldn't use a secondary hash to "fall back" on should the first supposed location for a file be unavailable?
00:09 tg2 so no broadcasting is required
00:10 tg2 unless 2 misses are hit
00:10 JoeJulian tg2: No reasons that I know of.
00:10 JoeJulian Though I suppose the quicker way would be sequentially.
00:11 avati the broadcast was really meant to be a transient behavior while a rebalance is in progress
00:11 tg2 or read it backwards
00:11 JoeJulian ah
00:11 tg2 broadcast can be the fallback still
00:11 tg2 you'd just have a second line of defense
00:11 JoeJulian avati is the expert
00:11 tg2 I thought gluster already worked like this
00:11 avati but it has stuck as default, for not very strong reasons
00:11 tg2 i was a bit surprised when I found out it went straight to broadcast if it missed on the first hash
00:12 tg2 so how does it work, internally, if you do a rebalance and the files can no longer be on their "ideal" node as per dht?
00:12 avati it was kept to address a corner case of crash consistency.. for normal operations you do not need broadcast fallback.. you can turn it off with "gluster volume set <name> cluster.lookup-unhashed off"
00:13 avati tg2, that's why you need the unhashed lookup, when things are "up in the air" during a rebalance
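[Aside: a minimal sketch of the option avati quotes above, with an assumed volume name "myvol":
    gluster volume set myvol cluster.lookup-unhashed off
    gluster volume info myvol        # the changed option is listed under "Options Reconfigured"]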
00:13 tg2 so, for a write operation
00:13 tg2 that the hash lands on a server that is unavailable
00:13 tg2 how does it determine target?
00:14 avati well, write() is fine.. you have an FD established already, and once you have an FD there's no more broadcasts
00:14 avati open() is what would trigger a broadcast (if at all)
00:14 tg2 well, lets say, new file
00:14 tg2 so (o) yes
00:15 tg2 how does it determine which node to put that new file on if it's first dht target is off
00:15 yinyin_ joined #gluster
00:15 avati if the hash lands on a server that is unavailable, new file creations fail
00:15 tg2 hm
00:15 tg2 the same "fallback" hash could resolve this too
00:16 avati it is dangerous to use this for new file creation
00:16 avati because the server which is down (temporarily) may actually hold the same filename
00:16 tg2 right but if you checked for that file while the server was down
00:16 tg2 it wouldn't show
00:17 avati yep, it wouldn't show
00:17 tg2 it would be a versioning conflict, true
00:17 Ark joined #gluster
00:17 tg2 but if you just wrote it to a secondary server as per alternative dht hashing, then you could resume it from there too
00:17 tg2 but that is a lot of extra logic
00:18 avati i'm not sure applications written on top of POSIX api's would work meaningfully if gluster behaved that way
00:18 tg2 ok so lets just assume its for existing files that have been reassigned to another node which is not the determined dht node
00:19 avati the file migration algorithm places linkfiles while migration is in process
00:19 tg2 could it be moved to a secondary host deterministically so that clients could also try to determine which node it would be on?
00:20 avati but there is a gap between rewriting the new layout in the dir xattrs and creating linkfiles
00:21 avati for that gap unhashed lookup is the best balance between correctness and performance
00:21 tg2 ok so its not like it broadcasts every time it can't find a file on the intended server, if that server has a linkfile in place it will tell the client to check the other server
00:21 avati yeah, no broadcast if linkfile is present
00:22 tg2 failing that, how /could/ another node have the file?
00:22 tg2 if gluster didn't move it there
00:22 avati for crash consistency.. if your data center powered off right after a rename() and the linkfile had not yet gotten committed into the backend (xfs) journal
00:22 avati etc.
00:24 tg2 so when you rename a file it creates a linkfile on the new intended target
00:24 tg2 until it's been moved over
00:24 avati yep, if a rename changes the hash to a different server, we don't move the contents over
00:24 avati instead rename it locally and place a linkfile in the new hashed server
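[Aside: a sketch of what such a linkfile looks like on the brick that now hashes the name — paths assumed. It is a zero-byte entry with the sticky bit set, whose xattr names the subvolume that really holds the data:
    ls -l /export/brick1/dir/renamed-file                                          # ---------T, size 0
    getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/dir/renamed-file]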
00:24 tg2 any way to deterministically tell from the application level if a rename will cause a server change?
00:24 avati why should the application care?
00:24 tg2 assuming access to xattr
00:25 tg2 if i wanted to build my own auto-levelling script
00:25 tg2 by changing the filename to force move to another server :)
00:25 tg2 i have application-level control of files on disk
00:25 tg2 could just rename it to something that would cause it to go to another host
00:25 avati it is trivial'ish to open up an API to allow application to request moving a physical file to another server
00:26 avati through a setxattr() or something
00:26 tg2 but it's not in there now right
00:26 tg2 i guess what i mean is - if you were to forcibly move a file to a new server that isn't its "ideal" dht target
00:26 avati it's not in there.. but 99% of it is implemented
00:26 tg2 you'd have to maintain link files
00:26 tg2 and presumably some kind of "keep it here" xattr
00:26 ravindran1 joined #gluster
00:27 avati yes, forcefully moving a file out would result in the creation of a linkfile in the "proper" server
00:27 tg2 but if you, at the application level, just renamed the file to something that would cause it to go to a determined new host
00:27 tg2 that would alleviate that
00:27 avati rename() messes with the namespace.. setxattr() is a cleaner api
00:27 avati for this job
00:28 tg2 but that creates a linkfile right?
00:29 avati so would rename.. unless you created a filename which hashes to the desired server (and still do a setxattr() to _move_ the contents over)
00:29 tg2 that's what i'm getting at
00:29 avati how would you know what filename to rename to?
00:29 tg2 why is setxattr required to make it move? gluster doesn't automatically queue this file to be moved to the new host?
00:29 avati as i said already, just renaming a file does not move it over physically
00:30 tg2 i thought gluster would see that the name changed and now the ideal target is elsewhere and queue it for move
00:30 avati it would see the name changed and place a linkfile in the ideal target to avoid broadcasts
00:30 ninkotech joined #gluster
00:30 tg2 yeah, but it wouldn't go so far as to actually move it over
00:31 avati you don't want files moving around gigabytes with arbitrary renames!
00:31 tg2 and as it is now i could issue an xattr request to have it moved to it's host
00:31 tg2 effectively doing a manual rebalance
00:32 tg2 (even without changing the file name, if desired)
00:32 avati with a trivial enhancement you should be able to issue an xattr and request moving the file
00:32 avati it is not there yet, 99% ready
00:32 tg2 ah ok
00:33 tg2 that would be pretty cool
00:33 tg2 how is the erasure parity stuff coming? that seems wildly complicated from a logic point of view but a pretty amazing feature
00:33 avati think of it as a variance on replication
00:33 tg2 has anybody else achieved this in a distributed system?
00:34 avati xavih has a working prototype
00:34 avati which, hopefully, will make its way into glusterfs master in a while
00:34 tg2 you could do some checksumming too on read to make sure the data integrity is good
00:34 tg2 like zfs does
00:34 JoeJulian tg2: erasure coding is coming.
00:35 tg2 it would have to work on block level though right?
00:35 JoeJulian oh, that's been mentioned..
00:35 tg2 last thing i saw was the blog post that paul wrote way back in october
00:36 cleverfoo_ joined #gluster
00:36 tg2 lots of code to fragment
00:37 tg2 is there any more recent documentation on it?
00:42 tg2 ah i see it on glusterforge
00:48 chirino_m joined #gluster
00:51 gdubreui joined #gluster
01:06 marcoceppi joined #gluster
01:06 marcoceppi joined #gluster
01:07 hagarth joined #gluster
01:44 marcoceppi_ joined #gluster
01:46 nueces joined #gluster
01:50 marcoceppi joined #gluster
01:50 marcoceppi joined #gluster
01:57 marcoceppi joined #gluster
01:57 marcoceppi joined #gluster
02:00 baojg joined #gluster
02:00 marcoceppi joined #gluster
02:00 marcoceppi joined #gluster
02:02 dhh joined #gluster
02:03 Honghui left #gluster
02:04 Honghui joined #gluster
02:06 joshin joined #gluster
02:06 joshin joined #gluster
02:12 nueces joined #gluster
02:15 marcoceppi joined #gluster
02:16 Honghui left #gluster
02:18 jmarley joined #gluster
02:18 jmarley joined #gluster
02:38 harish joined #gluster
02:49 gdubreui joined #gluster
02:51 rastar joined #gluster
02:55 Honghui joined #gluster
02:55 Honghui left #gluster
03:03 nueces joined #gluster
03:24 wushudoin joined #gluster
03:25 bala joined #gluster
03:32 dbruhn joined #gluster
03:35 dbruhn samba
03:35 dbruhn @samba
03:35 glusterbot dbruhn: Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/
03:38 dbruhn @learn
03:38 glusterbot dbruhn: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
03:39 dbruhn @learn samba as Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/ mor information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
03:39 glusterbot dbruhn: The operation succeeded.
03:39 dbruhn @samba
03:39 glusterbot dbruhn: (#1) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#2) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/ mor information about alternate samba
03:39 glusterbot dbruhn: configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
03:42 dbruhn @learn samba as Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#2) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/
03:42 glusterbot dbruhn: The operation succeeded.
03:47 itisravi joined #gluster
03:51 shubhendu joined #gluster
04:06 chirino joined #gluster
04:11 ppai joined #gluster
04:11 ravindran1 joined #gluster
04:14 vpshastry joined #gluster
04:20 ndarshan joined #gluster
04:22 kanagaraj joined #gluster
04:38 nueces joined #gluster
04:41 ngoswami joined #gluster
04:44 deepakcs joined #gluster
04:45 theron joined #gluster
04:48 glusterbot New news from newglusterbugs: [Bug 969384] [FEAT] Reduce the number of crawls the self-heal daemon needs to make <https://bugzilla.redhat.com/show_bug.cgi?id=969384>
04:48 baojg joined #gluster
04:49 rahulcs joined #gluster
04:49 haomai___ joined #gluster
04:54 hchiramm_ joined #gluster
04:55 davinder joined #gluster
04:55 kshlm joined #gluster
04:56 kdhananjay joined #gluster
04:58 kanagaraj joined #gluster
05:00 bala joined #gluster
05:06 Franklu joined #gluster
05:06 sputnik1_ joined #gluster
05:10 franklu_ joined #gluster
05:10 dusmant joined #gluster
05:10 baojg_ joined #gluster
05:11 RameshN joined #gluster
05:18 shubhendu joined #gluster
05:22 rjoseph joined #gluster
05:22 prasanth_ joined #gluster
05:24 bala joined #gluster
05:25 Franklu Hi, I ran into an odd bug with server.allow-insecure. Could anyone give some suggestions? I pasted the details in https://gist.github.com/mflu/11166168
05:25 glusterbot Title: client-bind-insecure (at gist.github.com)
05:28 snehal joined #gluster
05:29 Franklu Should I restart glusterd after I set server.allow-insecure? In 3.3 we didn't need to do this to make it take effect.
05:30 bala1 joined #gluster
05:38 aravindavk joined #gluster
05:38 raghu joined #gluster
05:39 ravindran1 joined #gluster
05:48 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <https://bugzilla.redhat.com/show_bug.cgi?id=921215>
05:50 kaushal_ joined #gluster
05:53 ramteid joined #gluster
05:55 nshaikh joined #gluster
05:55 hagarth joined #gluster
05:56 haomaiwang joined #gluster
05:57 haomai___ joined #gluster
06:00 latha joined #gluster
06:01 snehal joined #gluster
06:02 ravindran1 left #gluster
06:02 auganov joined #gluster
06:04 atinmu joined #gluster
06:04 lalatenduM joined #gluster
06:06 RameshN joined #gluster
06:10 sputnik1_ joined #gluster
06:11 ricky-ticky joined #gluster
06:13 prasanth_ joined #gluster
06:14 Andy5_ joined #gluster
06:17 monotek joined #gluster
06:18 bala2 joined #gluster
06:18 psharma joined #gluster
06:26 rgustafs joined #gluster
06:33 davinder joined #gluster
06:35 Philambdo joined #gluster
06:36 Honghui joined #gluster
06:39 chirino joined #gluster
06:41 ktosiek joined #gluster
06:41 Philambdo joined #gluster
06:43 rahulcs_ joined #gluster
06:46 rahulcs__ joined #gluster
06:48 rahulcs joined #gluster
06:49 meghanam joined #gluster
06:50 meghanam_ joined #gluster
06:51 haomaiwa_ joined #gluster
06:54 RameshN joined #gluster
07:03 Honghui joined #gluster
07:09 Honghui joined #gluster
07:10 DV joined #gluster
07:11 saurabh joined #gluster
07:11 shubhendu joined #gluster
07:13 eseyman joined #gluster
07:14 Honghui Hi, I'm concerned about the scalability of glusterfs. Say I have 2 nodes and decide to extend to 3 nodes. Will all the files be scanned and most of the data be moved between nodes?
07:18 lalatenduM Honghui, no for 3 2 to 3 nodes, but yes and no both, if you are expanding it to higher e.g. 10 nodes
07:19 lalatenduM s/3 2/2/
07:19 glusterbot What lalatenduM meant to say was: Honghui, no for 2 to 3 nodes, but yes and no both, if you are expanding it to higher e.g. 10 nodes
07:20 lalatenduM Honghui, and there are some methods which will reduce the data migration
07:23 lalatenduM Honghui, you can enable NUFA if you cant to migrate data https://github.com/gluster/glusterfs/blob/master/doc/features/nufa.md
07:23 glusterbot Title: glusterfs/doc/features/nufa.md at master · gluster/glusterfs · GitHub (at github.com)
07:23 Honghui As I understand it, glusterfs does not split the hash space the way openstack swift does, which makes glusterfs more difficult to scale.
07:23 keytab joined #gluster
07:23 lalatenduM s/cant/dont want to/
07:23 glusterbot What lalatenduM meant to say was: Honghui, you can enable NUFA if you dont want to to migrate data https://github.com/gluster/glusterfs/blob/master/doc/features/nufa.md
07:24 lalatenduM Honghui, AFAIK has algo of openstack swift and glusterfs are similar, not aware of this difference
07:25 lalatenduM s/has//
07:25 glusterbot What lalatenduM meant to say was: Honghui, AFAIK  algo of openstack swift and glusterfs are similar, not aware of this difference
07:25 Honghui openstack swift has partition, it split the hash space.
07:26 lalatenduM Honghui, not sure about it, ppai ^^
07:27 Andy5_ fwir after expanding the cluster you can rebalance. this will recalculate the hash space and rebalance the files.
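[Aside: the expand-then-rebalance sequence Andy5_ refers to, sketched with assumed host/volume/brick names:
    gluster peer probe server3
    gluster volume add-brick myvol server3:/export/brick1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status]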
07:27 Honghui lalatenduM, we plan to buy 2 Dell R720xd, both with 12 disks, each built in raid6, then deploy glusterfs on them.
07:28 ppai Honghui, Swift ensures even distribution by splitting entire has space into partitions and assigning partitions to devices...GlusterFS hashing is not done that way
07:28 ppai s/has/hash
07:29 ppai Honghui, Swift needs to maintain this in Ring files, GlusterFS has no such data structure/provision
07:29 Honghui ppai, you're talking me.
07:30 ppai Honghui, There's a lot of difference in Swift's hashing mechanism and GlusterFS's hashing
07:30 Honghui ppai, I agree.
07:30 lalatenduM Honghui, cool, however I am not a sysadmin, but I know a couple of awesome sysadmins hang around here
07:31 giannello joined #gluster
07:31 Honghui I hope we can find a solution that provides POSIX-compatible storage. swift does not provide that.
07:32 lalatenduM yes, gluster is posix compatible storage
07:32 lalatenduM Honghui, what is your usecase?
07:32 Honghui We tried glusterfs in amazon ec2 and at some small scale in our datacenter. we encountered some issues which blocked us from using it more.
07:34 Honghui lalatenduM, we use glusterfs to store billions of small files.
07:34 ppai Honghui, there's an upcoming feature in Swift - "storage policies" (WIP) ..it'll give lot of flexibility in choosing where and how objects are stored. You can look that up
07:36 Honghui ppai, thanks, let me have a look
07:36 ppai Honghui, https://swiftstack.com/blog/2014/01/27/openstack-swift-storage-policies/
07:36 glusterbot Title: Coming soon – Storage Policies in OpenStack Swift - SwiftStack (at swiftstack.com)
07:36 lalatenduM Honghui, you have one of the tougher use cases :) , do you have more writes or reads?
07:37 lalatenduM Honghui, check this out, (a production setup of glusterfs) http://www.youtube.com/watch?v=Ep4C2XWsG8o
07:37 glusterbot Title: Gluster Hangout with Daniel Mons from Cutting Edge - YouTube (at www.youtube.com)
07:37 chirino joined #gluster
07:37 Honghui 4 read vs 1 write
07:39 lalatenduM Honghui, ok
07:39 lalatenduM Honghui, check this out https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-Administration_Guide-Performance_Enhancements.html
07:39 glusterbot Title: Chapter 11. Configuring Red Hat Storage for Enhancing Performance (at access.redhat.com)
07:39 lalatenduM will be applicable to gluster too
07:41 davinder2 joined #gluster
07:47 fsimonce joined #gluster
07:50 haomaiw__ joined #gluster
07:58 wgao joined #gluster
08:03 Norky joined #gluster
08:05 liquidat joined #gluster
08:08 jtux joined #gluster
08:14 kumar joined #gluster
08:17 ngoswami joined #gluster
08:19 glusterbot New news from newglusterbugs: [Bug 1089906] [SNAPSHOT]: Restore is successful even when peer glusterd/nodes are down. <https://bugzilla.redhat.com/show_bug.cgi?id=1089906>
08:26 ravindran2 joined #gluster
08:27 Andyy3 joined #gluster
08:32 Andy6 joined #gluster
08:35 haomaiwa_ joined #gluster
08:35 eseyman joined #gluster
08:41 saravanakumar joined #gluster
08:45 haomai___ joined #gluster
08:53 vimal joined #gluster
08:53 rbw joined #gluster
09:03 vpshastry1 joined #gluster
09:10 chirino joined #gluster
09:10 rahulcs joined #gluster
09:18 smithyuk1 joined #gluster
09:19 davinder joined #gluster
09:23 rahulcs joined #gluster
09:23 vpshastry1 joined #gluster
09:23 smithyuk1 Morning all, kicked off a rebalance of our data a week or so ago and all seemed to be going well. Then suddenly one of our drives took on all the data, I have uploaded a graph for viewing ease http://i.imgur.com/KgtDIBw.png
09:24 davinder joined #gluster
09:24 smithyuk1 It was a forced rebalance and that graph is showing the last 2 weeks
09:27 jmarley joined #gluster
09:27 jmarley joined #gluster
09:29 ravindran1 joined #gluster
09:30 ravindran1 left #gluster
09:34 jtux joined #gluster
09:35 Honghui smithyuk1, the line shows the disk space vs time?
09:36 smithyuk1 that's right yeah
09:36 Honghui the available disk space?
09:36 Honghui free space?
09:36 smithyuk1 the free disk space
09:36 smithyuk1 sorry i should have labelled it
09:43 chirino joined #gluster
09:45 dusmant joined #gluster
09:49 jmarley joined #gluster
09:49 jmarley joined #gluster
09:56 harish_ joined #gluster
10:05 jmarley joined #gluster
10:05 jmarley joined #gluster
10:06 deepakcs joined #gluster
10:07 dbruhn_ joined #gluster
10:07 overclk_ joined #gluster
10:12 deepakcs joined #gluster
10:19 Slashman joined #gluster
10:33 edward2 joined #gluster
10:34 glafouille joined #gluster
10:36 glafouille hi, I have two nodes replicated with 3.5 and when one goes down (ifdown eth1 for instance), I can't write on the glusterfs-mounted volume for exactly 1 minute, then it's okay
10:37 glafouille is this normal?
10:37 glafouille how can I reduce this time?
10:38 RameshN joined #gluster
10:41 samppah @ping-timeout
10:41 glusterbot samppah: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
10:41 samppah glusterbot `
10:41 samppah ^
10:41 samppah errr
10:41 samppah glafouille: that was for you :)
10:43 Andy5__ joined #gluster
10:46 rahulcs joined #gluster
10:51 ira_ joined #gluster
10:53 glafouille samppah
10:53 glafouille sorry
10:55 glafouille glusterbot: so I can't go down 42 seconds...
10:56 glafouille glusterbot: in a critical environment, I lose 42 seconds of data
10:57 glafouille This is amazing because, when I bring the interface back up, I get my volume back instantly
10:57 samppah glafouille: ah, you can tune it with gluster volume set volName network.ping-timeout value
10:58 glafouille samppah : good news
10:58 samppah sorry, i thought that glusterbot would have told that :)
11:00 glafouille I'm trying it
11:00 glafouille no config file...
11:01 fsimonce` joined #gluster
11:01 glafouille samppah : how can I see the current value?
11:03 glafouille gluster volume info volName doesn't give me any piece of information about network.ping
11:03 glusterbot glafouille: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
11:04 samppah heh
11:04 samppah glafouille: gluster vol set help should show default values
11:04 lpabon joined #gluster
11:04 samppah and default is 42
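[Aside: checking and lowering the timeout samppah describes, volume name assumed:
    gluster volume set help | grep -A 3 ping-timeout    # shows the 42-second default
    gluster volume set myvol network.ping-timeout 10]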
11:05 monotek joined #gluster
11:05 dusmant joined #gluster
11:06 glafouille glusterbot : I don't know what I've done that counts as a naked ping!
11:07 kkeithley1 joined #gluster
11:09 glafouille samppah: must I tune it it on all gluster node?
11:10 samppah glafouille: only once per volume
11:11 baojg joined #gluster
11:12 glafouille samppah: it's working
11:12 samppah good :)
11:12 glafouille I set the value to 1
11:13 glafouille trying 0 to  see if it's possible
11:13 samppah i wouldn't go that low
11:13 glafouille it complains that 1 is the minimum
11:15 glafouille samppah: thanks for your help
11:20 ninkotech joined #gluster
11:20 ira_ joined #gluster
11:22 divbell joined #gluster
11:24 aravindavk joined #gluster
11:35 rahulcs_ joined #gluster
11:39 rahulcs joined #gluster
11:42 jmarley joined #gluster
11:42 jmarley joined #gluster
11:42 dusmant joined #gluster
11:50 glusterbot New news from newglusterbugs: [Bug 914641] Rebalance Stop Command does not give proper message <https://bugzilla.redhat.com/show_bug.cgi?id=914641>
11:56 B21956 joined #gluster
11:57 diegows joined #gluster
11:57 vimal joined #gluster
11:59 B21956 joined #gluster
12:01 itisravi joined #gluster
12:01 vimal joined #gluster
12:04 rbw joined #gluster
12:05 bala1 joined #gluster
12:09 atrius joined #gluster
12:11 Norky joined #gluster
12:12 itisravi_ joined #gluster
12:13 brutuz hi all
12:13 brutuz i was wondering if there is a howto on removing gfs
12:14 brutuz glusterfs from my servers?
12:15 rahulcs joined #gluster
12:22 Andy5__ joined #gluster
12:22 brutuz can anyone help me remove glusterfs from my system?
12:24 Ark joined #gluster
12:27 theron_ joined #gluster
12:27 ppai brutuz, did you install via RPM or from source ?
12:27 brutuz using package
12:28 brutuz thanks in advance ppai.
12:29 ppai yum remove glusterfs
12:29 edward1 joined #gluster
12:29 lpabon joined #gluster
12:29 rbw joined #gluster
12:30 brutuz but there are some client boxes mounting on the gfs on server1
12:30 brutuz do i unmount them first?
12:30 ppai brutuz, yes
12:31 mjsmith2 joined #gluster
12:33 brutuz can i kill the glusterfs process running on the client?
12:33 ppai brutuz, sure
12:34 brutuz if i stop the glusterfs server and remove glusterfs
12:34 brutuz what happens to the data?
12:35 ppai brutuz, data already written will be as is
12:35 ppai brutuz, http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
12:35 glusterbot Title: Basic Gluster Troubleshooting - GlusterDocumentation (at www.gluster.org)
12:35 ppai brutuz, section 9
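[Aside: a rough outline of the removal steps ppai describes, with assumed mount point and volume name — adjust the package list for your distro:
    umount /mnt/glustervol                    # on each client
    gluster volume stop myvol                 # on one server
    gluster volume delete myvol
    service glusterd stop                     # on each server
    yum remove glusterfs-server glusterfs-fuse glusterfs]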
12:35 ppai left #gluster
12:36 ppai joined #gluster
12:39 ppai brutuz, feel free to shoot a mail to "gluster-users" mailing list if you have further questions :) cya
12:39 brutuz thanks ppai
12:45 chirino joined #gluster
12:46 neofob joined #gluster
12:50 glusterbot New news from newglusterbugs: [Bug 1090041] [SNAPSHOT] : Need modification in message while creating snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1090041> || [Bug 1090042] [SNAPSHOT]: Able to create snapshot when one of the brick is down even without force <https://bugzilla.redhat.com/show_bug.cgi?id=1090042>
12:57 rahulcs joined #gluster
13:02 sroy_ joined #gluster
13:11 dbruhn joined #gluster
13:11 hchiramm_ joined #gluster
13:15 chirino joined #gluster
13:18 xiu joined #gluster
13:19 xiu hi, i have a lot of files in .glusterfs/indices/xattrop (26k), why would some files stall there? what can I do about it?
13:19 harish_ joined #gluster
13:19 bala1 joined #gluster
13:19 dusmant joined #gluster
13:19 primechuck joined #gluster
13:22 sauce joined #gluster
13:23 ira joined #gluster
13:26 ira joined #gluster
13:27 bennyturns joined #gluster
13:31 ira joined #gluster
13:36 mjsmith2 joined #gluster
13:37 mjsmith2 joined #gluster
13:42 andreask joined #gluster
13:48 failshell joined #gluster
13:49 bala1 joined #gluster
13:49 failshel_ joined #gluster
13:50 jobewan joined #gluster
13:51 failshel_ joined #gluster
13:57 Honghui joined #gluster
14:03 Ark joined #gluster
14:05 jdarcy joined #gluster
14:09 jdarcy joined #gluster
14:14 dusmant joined #gluster
14:14 kaptk2 joined #gluster
14:20 japuzzo joined #gluster
14:31 davent joined #gluster
14:31 davent Could someone please point me towards the docs for configuring a gluster cluster using config files rather than the cli tools?
14:31 RameshN joined #gluster
14:33 wushudoin joined #gluster
14:33 theron joined #gluster
14:35 Psi-Jack_ joined #gluster
14:36 davent left #gluster
14:37 deeville joined #gluster
14:38 deeville joined #gluster
14:38 rahulcs joined #gluster
14:39 calum_ joined #gluster
14:45 jiffe98 did jdarcy's negative lookup plugin ever make it to production?
14:46 andreask joined #gluster
14:50 _Bryan_ joined #gluster
14:50 jdarcy Some of its functionality was supposedly in the new(er) md-cache translator, but I'm not convinced that it really has the same effect.
14:51 calum_ joined #gluster
14:54 vpshastry joined #gluster
14:54 jiffe98 gotcha
14:55 georgeh|workstat hi all, just a question regarding the rebalance command, I have a distributed replicated volume, originally 2 nodes 8 bricks, I added 2 nodes 8 bricks for a total of 4 nodes 16 bricks, ran the rebalance command with force, but it did not migrate data from the existing bricks to the new ones, is there any way to do that other than scripting it myself?
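[Aside: the commands georgeh|workstat describes, plus the usual way to check what actually moved — volume name assumed:
    gluster volume rebalance myvol start force
    gluster volume rebalance myvol status     # per-node counts of files scanned/rebalanced and failures]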
14:56 MeatMuppet joined #gluster
15:03 MeatMuppet left #gluster
15:04 Humble joined #gluster
15:06 dbruhn joined #gluster
15:09 jag3773 joined #gluster
15:11 scuttle_ joined #gluster
15:11 jobewan joined #gluster
15:13 jdarcy joined #gluster
15:14 gmcwhistler joined #gluster
15:18 JustinClift joined #gluster
15:20 John_HPC joined #gluster
15:21 sauce_ joined #gluster
15:22 Psi-Jack_ joined #gluster
15:24 calum_ joined #gluster
15:28 daMaestro joined #gluster
15:28 lmickh joined #gluster
15:29 hagarth joined #gluster
15:34 theron joined #gluster
15:35 dbruhn_ joined #gluster
15:40 jiffe98 might give your translator a shot
15:46 georgeh|workstat forgot to mention, this is using v3.4.2
15:48 georgeh|workstat jiffe98, how do you mean?
15:49 jiffe98 georgeh|workstat I was talking to jdarcy about his negative lookup translator
15:49 georgeh|workstat gotcha
15:54 Honghui joined #gluster
16:08 jbd1 joined #gluster
16:11 sputnik1_ joined #gluster
16:14 edward1 joined #gluster
16:16 Licenser joined #gluster
16:18 chirino joined #gluster
16:18 glafouille hello: I set up two replicated nodes. For testing I launch a loop writing to the volume, then I stop one node. The batch waits 42 seconds before being able to write and the job goes on. On my gluster client I can't see the files that should have been written during those default 42 seconds, though they have been created asynchronously on the nodes. I umount then remount the volume and then all the files are there. My question is how could I
16:18 glafouille refresh the mount without unmounting it???
16:19 micu joined #gluster
16:20 Mo_ joined #gluster
16:21 glusterbot New news from newglusterbugs: [Bug 1088649] Some newly created folders have root ownership although created by unprivileged user <https://bugzilla.redhat.com/show_bug.cgi?id=1088649>
16:24 Andy6 glafouille: did you try ls -lR on the mount ?
16:24 pvh_sa joined #gluster
16:26 glafouille Andy6: no
16:26 glafouille Andy6 usually ls -l
16:27 glafouille Andy6 : let me try
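[Aside: Andy6's suggestion in concrete form, assuming the volume is fuse-mounted at /mnt/glustervol — the recursive listing forces a fresh lookup of every entry:
    ls -lR /mnt/glustervol > /dev/null]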
16:33 JoeJulian glafouille: "nodes" is ambiguous. All writes to glusterfs volumes should be through clients. Your clients /may/ also be servers.
16:33 JoeJulian Are all your mounts fuse mounts?
16:35 SFLimey joined #gluster
16:35 Licenser joined #gluster
16:36 glafouille JoeJulian : my fuse client is also one server
16:36 JoeJulian The client that's not seeing the files, is that fuse or nfs?
16:37 glafouille only fuse
16:37 JoeJulian what version?
16:38 glafouille 3.5
16:38 glafouille I see that it is officialy in beta
16:38 JoeJulian Actually, it's officially released.
16:38 JoeJulian @latest
16:38 glusterbot JoeJulian: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
16:38 glafouille but I just make an update with yum
16:40 JoeJulian I haven't tried that with 3.5. I wonder if they've added some new client-side directory caching. I'll have to check but it'll be in about an hour as I have some $dayjob tasks that need to get done.
16:42 rahulcs joined #gluster
16:54 cfeller joined #gluster
16:54 rjoseph joined #gluster
16:55 failshel_ joined #gluster
16:56 jbd1 joined #gluster
16:58 vpshastry1 joined #gluster
17:00 rahulcs joined #gluster
17:01 Matthaeus joined #gluster
17:01 rahulcs_ joined #gluster
17:09 failshell joined #gluster
17:12 jdarcy joined #gluster
17:13 kanagaraj joined #gluster
17:14 Matthaeus joined #gluster
17:16 theron joined #gluster
17:21 DV joined #gluster
17:25 jbd1 Can anyone here speak to performance difference between rebalance on 3.3.2 and 3.4.3 ? I need to rebalance, and I also need to upgrade-- but if I can rebalance in only three weeks after upgrading, I'll upgrade first.
17:25 mjsmith2_ joined #gluster
17:25 jbd1 (right now I would estimate that a rebalance would take 5-6 weeks)
17:26 Licenser_ joined #gluster
17:27 hagarth joined #gluster
17:35 jabrcx joined #gluster
17:38 DV joined #gluster
17:45 badone joined #gluster
17:45 zerick joined #gluster
17:46 rahulcs joined #gluster
17:47 zaitcev joined #gluster
17:47 John_HPC I'm trying to update to Gluster 3.5. I am getting this error (installing by using yumdownloader and then rpm -Uvh; since still having signature problems).
17:47 John_HPC error: %post(glusterfs-server-3.5.0-1.el5.x86_64) scriptlet failed, exit status 2
17:47 John_HPC .. /var/tmp/rpm-tmp.6425: line 46: syntax error: unexpected end of file
17:51 John_HPC Actually, it seems to have installed...
17:52 John_HPC something is wrong, as I can't restart gluster
17:54 John_HPC The EPEL5 glusterfs-server-3.5.0-1.el5.x86_64.rpm seems to be corrupt
17:58 jabrcx left #gluster
18:00 zerick joined #gluster
18:06 John_HPC Actually, using --nogpgcheck I am able to upgrade using yum. However, on CentOS 5 I still get an error with the glusterfs-server package
18:06 larsks left #gluster
18:07 John_HPC ... /var/tmp/rpm-tmp.7409: line 46: syntax error: unexpected end of file Non-fatal POSTIN scriptlet failure in rpm package glusterfs-server-3.5.0-1.el5.x86_64 error: %post(glusterfs-server-3.5.0-1.el5.x86_64) scriptlet failed, exit status 2
18:18 DV joined #gluster
18:19 Pavid7 joined #gluster
18:21 dbruhn joined #gluster
18:23 ricky-ticky joined #gluster
18:34 DV joined #gluster
19:00 jmarley joined #gluster
19:00 jmarley joined #gluster
19:04 DV joined #gluster
19:10 kkeithley_ John_HPC: you can set gpgcheck=0 in your /etc/yum.repos.d/glusterfs-epel.repo file on your el5 boxes, then you won't have to remember to use --nogpgcheck. The non-fatal error seems to be benign AFAICT; glusterfs-server is fully installed, or seems to be. I'll take a look at it when I have some spare cycles.
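[Aside: what that looks like in the repo file — stanza name assumed, other lines left as shipped:
    # /etc/yum.repos.d/glusterfs-epel.repo
    [glusterfs-epel]
    ...
    gpgcheck=0]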
19:16 jag3773 joined #gluster
19:29 elico joined #gluster
19:37 theron joined #gluster
19:39 wushudoin joined #gluster
19:40 nueces joined #gluster
19:43 andreask joined #gluster
19:49 rahulcs joined #gluster
20:00 Andy5__ joined #gluster
20:09 lpabon joined #gluster
20:09 rnz joined #gluster
20:22 chirino joined #gluster
20:26 rahulcs joined #gluster
20:34 ctria joined #gluster
20:35 jag3773 joined #gluster
20:39 theron_ joined #gluster
20:42 pdrakeweb joined #gluster
20:44 nueces joined #gluster
20:46 SFLimey joined #gluster
20:54 Pavid7 joined #gluster
20:54 John_HPC Here is my log for my failed glusterd start
20:54 John_HPC http://pastebin.com/sc9QqHLp
20:54 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:55 John_HPC Sorry, here it is in paste.ubuntu
20:55 John_HPC http://paste.ubuntu.com/7309871/
20:55 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
20:56 semiosis John_HPC: could you pastie /etc/glusterfs/glusterd.vol please
20:57 John_HPC http://paste.ubuntu.com/7309882/
20:57 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
20:59 joshin joined #gluster
20:59 joshin joined #gluster
21:02 crushkil1 joined #gluster
21:02 crushkil1 he;;p
21:02 crushkil1 *hello
21:02 crushkil1 i have read almost every result for this gluster error
21:03 crushkil1 [2014-04-22 16:28:46.566664] E [client-handshake.c:1717:client_query_portmap_cbk] 0-rimagesvolume-client-0: failed to get the port number for remote subvolume
21:03 crushkil1 done some basic troubleshooting, not 100% sure what to do next... any ideas?
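[Aside, not from the log: the usual first checks for that portmap error, using the volume name implied by the message — confirm the brick process is running and has a port, and restart it if not:
    gluster volume status rimagesvolume
    gluster volume start rimagesvolume force     # restarts any brick process that isn't running]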
21:04 John_HPC sorry
21:09 John_HPC http://paste.ubuntu.com/7309952/ - this is an error I receive when opening the CLI
21:09 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
21:25 pdrakeweb joined #gluster
21:30 plarsen joined #gluster
21:52 edward3 joined #gluster
22:16 theron joined #gluster
22:25 chirino joined #gluster
22:36 ThatGraemeGuy joined #gluster
22:44 yinyin joined #gluster
23:06 uebera|| joined #gluster
23:06 uebera|| joined #gluster
23:08 bala joined #gluster
23:27 Philambdo joined #gluster
23:56 chirino joined #gluster
23:57 Andy5__ joined #gluster
