
IRC log for #gluster, 2016-01-19


All times shown according to UTC.

Time Nick Message
00:05 chirino joined #gluster
00:21 chirino joined #gluster
00:23 plarsen joined #gluster
00:35 chirino joined #gluster
01:10 nangthang joined #gluster
01:11 gildub joined #gluster
01:17 gbot joined #gluster
01:18 kdhananjay joined #gluster
01:18 gbox joined #gluster
01:23 haomaiwang joined #gluster
01:27 badone joined #gluster
01:35 Lee1092 joined #gluster
01:43 MACscr|lappy joined #gluster
01:45 zhangjn joined #gluster
01:53 MACscr1 joined #gluster
01:53 harish joined #gluster
01:53 MACscr1 joined #gluster
01:54 MACscr1 joined #gluster
01:54 MACscr1 joined #gluster
01:55 MACscr joined #gluster
02:07 davidbitton joined #gluster
02:09 bowhunter joined #gluster
02:19 gbox joined #gluster
02:19 Pupeno joined #gluster
02:26 haomaiwang joined #gluster
02:30 MACscr|lappy joined #gluster
02:53 spalai joined #gluster
02:55 rcampbel3 joined #gluster
03:15 spalai left #gluster
03:15 natarej joined #gluster
03:18 davidbitton joined #gluster
03:20 Manikandan joined #gluster
03:21 overclk joined #gluster
03:22 bharata-rao joined #gluster
03:25 rafi joined #gluster
03:34 atalur joined #gluster
03:38 zhangjn joined #gluster
03:44 F2Knight joined #gluster
03:46 shubhendu joined #gluster
03:50 sakshi joined #gluster
03:51 kanagaraj joined #gluster
03:51 ashiq joined #gluster
03:54 itisravi joined #gluster
03:57 raghug joined #gluster
04:00 atinm joined #gluster
04:09 pppp joined #gluster
04:10 vmallika joined #gluster
04:18 ramteid joined #gluster
04:23 nbalacha joined #gluster
04:27 dgbaley joined #gluster
04:31 dgbaley joined #gluster
04:33 zhangjn joined #gluster
04:34 RameshN joined #gluster
04:35 nehar joined #gluster
04:41 gem joined #gluster
04:44 gbox joined #gluster
04:45 gbox Hi, Happy MLK Day in the USA (for another 15 min back East anyway)
04:45 kdhananjay joined #gluster
04:46 gbox Has anyone created a libvirt VM with CentOS/RHEL7 and Gluster 3.7.6 (or any similar combo)?
04:47 ndarshan joined #gluster
04:47 kdhananjay gbox: what's the matter?
04:48 gbox Docs on the web vary a lot and I have encountered several errors/warnings that seem to have come and gone since 2013.
04:49 gbox Specifically: the libvirt pool is OK but creating a volume leads to "All subvolumes are down" except there is a file on the volume
04:49 kdhananjay gbox: i think that was with libgfapi long time back.
04:49 gbox Seems like they're just debug messages: https://bugzilla.redhat.com/show_bug.cgi?id=1046259
04:49 glusterbot Bug 1046259: unspecified, medium, rc, pgurusid, CLOSED CURRENTRELEASE, Qemu-img will prompt error and even core dump when creating libgfapi image
04:50 gbox So it should be OK.  I'll proceed.
04:50 gbox kdhananjay: Yes that is a year old or so.  Just surprised to see it on a new box.
04:51 kdhananjay gbox: Yeah. I had used it last about 8 months ago and if i remember correctly, the volume did function normally.
04:51 gbox kdhananjay: Awesome, that's all I wanted to know.  Thank you!
04:52 rafi joined #gluster
04:52 kdhananjay gbox: I have a colleague who knows more about this. I will check with him and give you an update in a few hours. He is not around yet.
04:52 Saravana_ joined #gluster
04:52 Pupeno joined #gluster
05:00 gbox kdhananjay: Thanks again.  I can come back or drop you a line at RH.
05:04 gbox I must have added a peer by IP address.  The Hostname is the IP address and the "Other names:" is the hostname.
05:04 gbox Works fine, but is there any way to clean that up?
05:05 gbox Some old docs have a planned "gluster peer rename" to be implemented when time allows (ha): http://www.gluster.org/community/documentation/index.php/Features/Better_peer_identification
05:06 haomaiwa_ joined #gluster
05:12 ppai joined #gluster
05:14 itisravi joined #gluster
05:15 aravindavk joined #gluster
05:18 atinm joined #gluster
05:23 zhangjn joined #gluster
05:28 Apeksha joined #gluster
05:28 karthik_ joined #gluster
05:28 ramky joined #gluster
05:30 EinstCra_ joined #gluster
05:30 spalai joined #gluster
05:32 EinstCr__ joined #gluster
05:32 nishanth joined #gluster
05:39 Saravana_ joined #gluster
05:42 kdhananjay joined #gluster
05:42 vimal joined #gluster
05:46 atrius` joined #gluster
05:46 mlhess joined #gluster
05:47 Bhaskarakiran joined #gluster
05:55 Saravanakmr joined #gluster
05:57 hgowtham joined #gluster
05:59 hos7ein joined #gluster
06:05 kovshenin joined #gluster
06:05 jiffin joined #gluster
06:14 skoduri joined #gluster
06:14 nangthang joined #gluster
06:17 atinm joined #gluster
06:18 pppp joined #gluster
06:24 mobaer joined #gluster
06:24 17WABK2EM joined #gluster
06:26 vmallika joined #gluster
06:29 karnan joined #gluster
06:31 nishanth joined #gluster
06:33 nbalacha joined #gluster
06:37 EinstCrazy joined #gluster
06:44 javi404 joined #gluster
06:48 Manikandan joined #gluster
06:53 Pupeno joined #gluster
06:56 cvstealth joined #gluster
07:02 vimal joined #gluster
07:03 vimal joined #gluster
07:09 haomaiwa_ joined #gluster
07:10 Intensity joined #gluster
07:11 gem joined #gluster
07:13 anil joined #gluster
07:13 gowtham joined #gluster
07:15 atinm joined #gluster
07:20 karnan joined #gluster
07:22 arcolife joined #gluster
07:25 zhangjn joined #gluster
07:27 inodb joined #gluster
07:29 nbalacha joined #gluster
07:31 jtux joined #gluster
07:36 haomaiwa_ joined #gluster
07:37 DV joined #gluster
07:37 kovshenin joined #gluster
07:39 EinstCrazy joined #gluster
07:45 nangthang joined #gluster
07:57 mobaer joined #gluster
08:02 EinstCrazy joined #gluster
08:02 EinstCrazy joined #gluster
08:03 [Enrico] joined #gluster
08:06 zhangjn joined #gluster
08:07 ivan_rossi joined #gluster
08:23 arcolife joined #gluster
08:26 anil joined #gluster
08:29 mhulsman joined #gluster
08:29 inodb joined #gluster
08:34 zhangjn joined #gluster
08:35 Saravana_ joined #gluster
08:35 DV joined #gluster
08:42 liewegas joined #gluster
08:52 harish joined #gluster
08:52 zhangjn joined #gluster
09:00 DV joined #gluster
09:01 ctria joined #gluster
09:13 gem joined #gluster
09:16 Bhaskarakiran joined #gluster
09:18 glafouille joined #gluster
09:19 atinm joined #gluster
09:33 kotreshhr joined #gluster
09:36 Slashman joined #gluster
09:40 vimal joined #gluster
09:43 haomaiwa_ joined #gluster
09:43 gem joined #gluster
09:44 aravindavk joined #gluster
09:45 harish joined #gluster
09:51 inodb_ joined #gluster
09:52 d0nn1e joined #gluster
09:52 atinm joined #gluster
10:03 wnlx joined #gluster
10:03 coredump joined #gluster
10:06 vimal joined #gluster
10:08 mdavidson joined #gluster
10:10 Bhaskarakiran joined #gluster
10:28 swebb joined #gluster
10:46 b0p joined #gluster
10:47 spalai joined #gluster
10:50 dgbaley joined #gluster
10:54 spalai joined #gluster
11:02 raghug joined #gluster
11:10 spalai left #gluster
11:13 Bhaskarakiran_ joined #gluster
11:22 gem joined #gluster
11:31 Pupeno joined #gluster
11:31 Pupeno joined #gluster
11:44 kotreshhr joined #gluster
12:04 dlambrig joined #gluster
12:05 mhulsman joined #gluster
12:06 ahino joined #gluster
12:06 jockek joined #gluster
12:12 jockek joined #gluster
12:19 nbalacha joined #gluster
12:19 julim joined #gluster
12:21 muneerse joined #gluster
12:21 nangthang joined #gluster
12:21 kanagaraj joined #gluster
12:24 kanagaraj joined #gluster
12:27 mhulsman joined #gluster
12:31 RameshN joined #gluster
12:37 owlbot joined #gluster
12:52 the-me joined #gluster
13:01 gem joined #gluster
13:03 jdang joined #gluster
13:05 dlambrig joined #gluster
13:08 shubhendu joined #gluster
13:16 Bhaskarakiran_ joined #gluster
13:17 cliluw joined #gluster
13:18 unclemarc joined #gluster
13:23 RameshN joined #gluster
13:23 kotreshhr left #gluster
13:24 rwheeler joined #gluster
13:40 Pupeno joined #gluster
13:45 R0ok_ joined #gluster
13:46 EinstCrazy joined #gluster
13:49 aravindavk joined #gluster
13:51 haomaiwa_ joined #gluster
13:53 B21956 joined #gluster
13:57 dlambrig joined #gluster
13:57 Apeksha joined #gluster
13:58 karnan joined #gluster
13:59 ahino1 joined #gluster
14:00 archit_ joined #gluster
14:01 Sunghost joined #gluster
14:02 Sunghost Hello, I need some information about gluster as a distributed fs. How can I identify and delete missing files in ./glusterfs?
14:02 EinstCrazy joined #gluster
14:10 EinstCrazy joined #gluster
14:15 tswartz joined #gluster
14:17 nbalacha joined #gluster
14:17 EinstCrazy joined #gluster
14:18 EinstCrazy joined #gluster
14:21 b0p joined #gluster
14:24 shaunm joined #gluster
14:25 ekuric joined #gluster
14:27 plarsen joined #gluster
14:34 chirino joined #gluster
14:41 Pupeno joined #gluster
14:46 jdang joined #gluster
14:53 jwd joined #gluster
14:58 rafi1 joined #gluster
15:00 Pupeno joined #gluster
15:09 kkeithley joined #gluster
15:15 ira joined #gluster
15:22 ahino joined #gluster
15:28 EinstCrazy joined #gluster
15:30 hamiller joined #gluster
15:38 The_Ball joined #gluster
15:44 mhulsman joined #gluster
15:46 farhorizon joined #gluster
15:49 curratore joined #gluster
15:49 bowhunter joined #gluster
15:50 chirino joined #gluster
15:50 muneerse joined #gluster
15:50 spalai joined #gluster
15:51 curratore hello, could anyone help me with a Distribute Geo-replication that I am trying to create?
15:52 B21956 left #gluster
15:54 Apeksha joined #gluster
15:54 kanagaraj joined #gluster
15:59 coredump joined #gluster
16:09 neofob joined #gluster
16:15 wushudoin joined #gluster
16:25 Sunghost is there no one who will help?
16:25 Sunghost I need information on how to clean up the .glusterfs folder if I copied files and dirs directly onto a node
16:26 Sunghost what do the hardlinks look like when everything is OK on a distributed volume, and what do they look like for files moved directly onto the brick?
16:26 ahino joined #gluster
16:29 bennyturns joined #gluster
16:29 dlambrig joined #gluster
16:30 raghu joined #gluster
16:31 JoeJulian Sunghost: One doesn't generally delete missing files. ;)
16:32 muneerse joined #gluster
16:32 JoeJulian Sunghost: If a file exists in .glusterfs for which the hardlinked directory entry no longer exists, the link count will be 1.
16:32 JoeJulian Sunghost: So "find .glusterfs -type f -links 1" is probably what you're asking for.
16:33 Sunghost hi joejulian, my problem is that I must move all files from the old volume, directly on the node, to a new volume
16:33 Sunghost ok, I found that on the net too, but I'm not sure if it's actually true, and it is for distributed, right? but isn't there only 1 hardlink for 1 file?
16:34 Sunghost so if I run this it would also show existing files?
16:34 JoeJulian It's for all volume types.
16:34 Sunghost so if a file exists there are 2 links, right? one for the hardlink and one for the file?
16:34 JoeJulian Right.
16:35 hagarth joined #gluster
16:35 JoeJulian One for the gfid filename, and one for the original filename. Could even have more than 2, if you made additional hardlinks through the client mount.
16:35 arcolife joined #gluster
16:36 Sunghost ok, I understand, and if I want to delete these orphaned links I can run find .glusterfs -type f -links 1 -exec rm {} \;
16:38 Sunghost joe?
16:38 JoeJulian Sure.
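A minimal sketch of that cleanup, assuming a hypothetical brick path of /bricks/vol1/brick1; the test is -links 1 (exactly one link left, i.e. only the gfid entry remains), and it is worth reviewing the list before deleting anything:

    cd /bricks/vol1/brick1
    # list gfid files whose named directory entry is gone (link count == 1)
    find .glusterfs -type f -links 1 > /tmp/orphaned-gfids.txt
    less /tmp/orphaned-gfids.txt
    # only after reviewing the list, remove them
    xargs -d '\n' rm -f < /tmp/orphaned-gfids.txt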
16:39 Sunghost the problem is that the free space decreases although I am just moving files directly from the vol1 folder on node1 to the new vol2
16:39 RameshN joined #gluster
16:41 spalai joined #gluster
16:42 JoeJulian I don't understand what you're doing or why that's a problem.
16:43 JoeJulian Please use ,,(glossary) terms so I can more easily understand.
16:43 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:45 ahino joined #gluster
16:46 Sunghost sorry, node=brick
16:46 Sunghost ok, I have to copy all files directly from one brick to a new distributed volume
16:47 Sunghost my understanding is that I don't need more free space than before, since I am moving the files
16:47 Sunghost the question is: is that all because of the link files?
16:47 Sunghost sorry - actually I move the files but the free space decreases
16:48 skylar joined #gluster
16:48 JoeJulian Ok, so you mount the new volume (ie. mount -t glusterfs server1:newvol /mnt/newvol) and copy everything *except* the .glusterfs directory to /mnt/newvol.
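A sketch of that step with hypothetical paths, using rsync so the .glusterfs directory is skipped and sparse files stay sparse:

    mount -t glusterfs server1:newvol /mnt/newvol
    # trailing slash on the source copies the brick's contents, not the directory itself
    rsync -aH --sparse --exclude='/.glusterfs' /bricks/oldvol/brick1/ /mnt/newvol/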
16:48 Sunghost right, sorry, it's a hard day with a lot to do and it's late here in Germany
16:48 Sunghost but yes, that's it
16:48 Sunghost before I had x TB of free space but now I only have x GB
16:49 Sunghost I split the move into small pieces so that each file is deleted after its move has finished
16:49 JoeJulian Usually that means that one or more of your bricks did not get mounted.
16:49 JoeJulian So now you're filling up root.
16:50 Sunghost oldvol is not mounted, right
16:50 Sunghost newvol is created as it should
16:50 Sunghost and all seems ok in newvol
16:50 Sunghost i think the normal "cleanup" from gluster is not working, which is logical since I move directly and not over the mountpoint
16:51 JoeJulian You cannot copy from brick to brick.
16:53 Sunghost ?! I move local files from brick1's vol1 dir to /mnt/newvol
16:54 JoeJulian "move direct and not over the mountpoint" sounded like brick to brick.
16:54 Sunghost so files should be deleted from brick1's old vol dir and created via gluster over /mnt/newvol on brick x
16:54 JoeJulian right
16:55 Sunghost the problem is that free space before moving <> free space after moving
16:55 Sunghost and it should be nearly the same
16:55 Sunghost as if I move one file from disk1/dir1 -> disk1/dir2
16:56 Sunghost is that clearer?
16:56 JoeJulian Not necessarily. If your file was sparse to start with and you don't maintain that, it will take up more space.
16:56 Sunghost that's the situation, and I think there are files in the old .glusterfs which are not cleaned up
16:58 RameshN_ joined #gluster
16:58 Sunghost sorry, I didn't understand that - where does the space of a file lie? once on the filesystem and once in gluster, or how should I understand this? are the files damaged?
16:59 JoeJulian Try this. "df /; truncate --size 100G /root/delete_me; df"
16:59 JoeJulian You'll notice that df doesn't change even though you just created a 100G file.
17:00 Sunghost ok, I understand and have read about such things, but what does it mean, are the inodes in use?
17:00 Sunghost is it a filesystem problem?
17:01 JoeJulian There is one inode in use for that 100G file, and that's just for the stats.
17:02 JoeJulian When you "cp /root/delete_me /root/oops" it copies all 100G of 0x0 to /root/oops, which will now take up an actual 100G of disk space.
17:03 Sunghost ok, yes, it's a copy, so a 1:1 file with double the size
17:04 Sunghost so what is the "dirty" way to get my data to the new volume? or am I right that I have to mount oldvol and newvol and move the files, and if files can't be moved, they are corrupt?
17:04 JoeJulian So when you "mv ${brick_path} /mnt/newvol", mv sees that as moving between filesystems. What it actually does is copy/unlink. So if your source files are sparse, the destination file will not be sparse and will, instead, have inodes full of 0s.
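A quick way to check whether the source files are sparse before moving them (the file name here is just an example); a large gap between allocated and apparent size means a copy that does not preserve holes will consume the difference in real disk space:

    truncate --size 100G /root/delete_me        # sparse file: 100G apparent, ~0 allocated
    du -h /root/delete_me                       # allocated (on-disk) size
    du -h --apparent-size /root/delete_me       # apparent size
    ls -lsh /root/delete_me                     # first column shows allocated blocks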
17:05 mobaer joined #gluster
17:05 JoeJulian If I understand correctly, you're trying to populate a new volume using the same disks. The disks are not large enough to duplicate all your data so you would like to remove files as they're copied.
17:05 Sunghost mh ok, so it's not moving but copying files
17:06 JoeJulian right
17:06 Sunghost ok think i understand that
17:06 JoeJulian So what I think you'll want to do is write your own cp/rm script and use the "--sparse=always" switch with cp.
17:07 Sunghost no I mount oldvol and move from oldvol to newvol, which finally is a move, right
17:08 Sunghost no = now
17:08 F2Knight joined #gluster
17:09 JoeJulian No. If you cross mountpoints, mv will actually cp/rm.
17:09 Sunghost ok, but my aim is reached
17:10 JoeJulian If it is, then I'm misunderstanding your problem.
17:10 Sunghost the command would be cp --sparse=always /brick/dir /mnt/newvol and altogether it would do the move job, right?
17:10 Sunghost puh thats hard ;)
17:11 Sunghost in short I have to move the files from the "damaged" oldvol to newvol
17:11 Sunghost on same bricks in distributed
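A rough sketch of the cp/rm script JoeJulian suggests, assuming the old brick sits at /bricks/oldvol/brick1 and the new volume is mounted at /mnt/newvol (both paths hypothetical); each file is copied with holes preserved and removed from the source only if the copy succeeded, so free space is released as the migration proceeds:

    #!/bin/bash
    # Sketch: copy files from the old brick to the mounted new volume, preserving
    # sparseness, and remove each source file only after a successful copy.
    # Paths are hypothetical -- adjust to the real brick path and mountpoint.
    src=/bricks/oldvol/brick1
    dst=/mnt/newvol

    cd "$src" || exit 1
    find . -path ./.glusterfs -prune -o -type f -print0 |
    while IFS= read -r -d '' f; do
        mkdir -p "$dst/$(dirname "$f")"
        cp --sparse=always --preserve=mode,timestamps "$f" "$dst/$f" && rm -f "$f"
    done

It only handles regular files; directories are created as needed, but symlinks and ownership would need extra handling.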
17:11 shubhendu joined #gluster
17:17 illogik joined #gluster
17:18 curratore hello, could anyone help me with a Distribute Geo-replication that I am trying to create? I have done it all: master volume, slave volume, keys exported, tested through a different port (10022), passed gverify.sh, disabled firewalld, free space is ok, but it's not working, always getting the same error.
17:26 rcampbel3 joined #gluster
17:38 Rapture joined #gluster
17:48 mhulsman joined #gluster
17:57 ivan_rossi left #gluster
18:07 curratore anyone?
18:08 curratore :)
18:08 rafi joined #gluster
18:08 JoeJulian No idea. I would hope there's an error logged somewhere, but I haven't really used georep at all.
18:09 curratore there is a log error: "Unable to fetch slave volume details. Please check the slave cluster and slave volume."
18:09 curratore but there's not too much info in the logs about this problem :(
18:10 curratore https://www.irccloud.com/pastebin/149ClnhH/dist-georeplica.log
18:10 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
18:12 curratore thanks anyway for answer :)
18:13 JoeJulian curratore: look for /var/log/glusterfs/create_verify_log
18:15 curratore there isn't any file called create_verify_log in /var/log/glusterfs :(
18:16 mobaer joined #gluster
18:16 hagarth curratore: is port 24007 on one of the slave nodes reachable from master?
18:16 curratore checking with telnet...
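For example (the slave hostname is a placeholder), either of these verifies that glusterd on the slave is reachable:

    telnet slave1.example.com 24007
    nc -zv slave1.example.com 24007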
18:17 JoeJulian If that file doesn't exist, then running GSYNCD_PREFIX"/gverify.sh" failed.
18:18 virusuy Hi guys, I'm trying to set up features.quota-deem-statfs on my 3.7 gluster and it fails with the following error: volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again.
18:18 virusuy The thing is, all the connected clients are in 3.7
18:18 curratore I launched it manually and it gave exit 0
18:19 JoeJulian if you run glusterd with debug logging you can see all the parameters passed to that script.
18:19 curratore ok I will try
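One way to do that, sketched here (service handling differs between distributions): stop glusterd on the master, run it in the foreground with debug logging, and re-run the geo-replication create command from another shell so the gverify.sh call and its arguments appear in the output.

    systemctl stop glusterd
    glusterd --debug              # foreground, log level DEBUG, logs to the console
    # or restart it normally with a higher log level:
    glusterd --log-level=DEBUG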
18:19 JoeJulian virusuy: gluster volume set all cluster.op-version 30704
18:20 virusuy JoeJulian: still fail
18:20 virusuy this is weird
18:21 bennyturns joined #gluster
18:21 PaulCuzner joined #gluster
18:23 hagarth virusuy: what versions of clients are connected to the volume?
18:23 virusuy hagarth: all are using 3.7.1
18:24 hagarth virusuy: gluster volume status <volname> clients might also help to determine the client versions
18:25 JoeJulian And, of course, 30704 won't work. You'd have to use 30701 since you're not current.
18:25 virusuy hagarth: I ran that command but i'm only seeing the connected clients, not version
18:26 hagarth virusuy: JoeJulian is right, you would need to use 30701 as the op-version
18:26 virusuy ok, i'll try that
18:27 virusuy nope, still same error
18:27 morse joined #gluster
18:28 virusuy Oh, well, client is 3.7.1, my gluster is 3.7.6
18:28 virusuy could that be the issue ?
18:29 JoeJulian I'm guessing it is. There's been some work done on quota in 3.7. I suspect some change was made that requires the clients to have a matching op-version.
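For reference, a way to inspect and raise the cluster op-version on the servers (the volume name below is a placeholder); 30701 corresponds to 3.7.1, the oldest client still connected:

    grep operating-version /var/lib/glusterd/glusterd.info
    gluster volume set all cluster.op-version 30701
    gluster volume status myvol clients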
18:31 curratore I think my problem is coming from the docker container use
18:31 curratore because I am getting errors like "the Server and Client lk-version numbers are not same"
18:31 glusterbot curratore: This is normal behavior and can safely be ignored.
18:31 curratore ah
18:31 curratore :D
18:31 virusuy JoeJulian: I'll do some testing later this evening, thanks for your help
18:34 Pintomatic_ joined #gluster
18:35 janegil joined #gluster
18:35 gbox joined #gluster
18:36 Lee1092 joined #gluster
18:36 mobaer joined #gluster
18:38 scubacuda joined #gluster
18:42 curratore hagarth: yes the 24007 is reachable from master
18:45 curratore watching the log dirs /var/log/glusterfs/geo-replication-slaves/slave.log I can see a lot of lines like this ... 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info [No such file or directory]
18:45 curratore inside the docker container this file does not exist
18:48 spalai left #gluster
18:50 mhulsman joined #gluster
18:58 JesperA_ joined #gluster
19:15 ahino joined #gluster
19:19 virusuy JoeJulian: well, my issue with features.quota-deem-statfs is solved; the client was using 3.7.1 and the nodes are 3.7.6. After I upgraded my client to 3.7.6, deem-statfs worked
19:24 calavera joined #gluster
19:24 JoeJulian excellent
19:24 JoeJulian Thanks for the feedback.
19:28 mobaer joined #gluster
19:32 skylar joined #gluster
19:33 ildefonso joined #gluster
19:35 farhoriz_ joined #gluster
19:45 hagarth curratore: that normally should not cause a problem
19:48 chirino joined #gluster
19:55 farhorizon joined #gluster
19:56 PsionTheory joined #gluster
20:10 farhorizon joined #gluster
20:15 sagarhani joined #gluster
20:19 bennyturns joined #gluster
20:19 farhorizon joined #gluster
20:21 rwheeler joined #gluster
20:23 Philambdo1 joined #gluster
20:39 gildub joined #gluster
20:49 calavera joined #gluster
20:50 farhorizon joined #gluster
21:00 Rapture joined #gluster
21:00 ctria joined #gluster
21:06 neofob joined #gluster
21:06 bennyturns joined #gluster
21:11 DV joined #gluster
21:23 rcampbel3 joined #gluster
21:25 dblack joined #gluster
21:30 JoeJulian curratore: Pretty sure this is your problem: https://github.com/docker/docker/issues/514
21:30 glusterbot Title: Can't use Fuse within a container · Issue #514 · docker/docker · GitHub (at github.com)
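If that is indeed the cause, a commonly used workaround (sketched here; the image name is illustrative and exact requirements vary by Docker version) is to expose the fuse device to the container and grant the capability that mounting needs:

    docker run -d --name gluster-slave \
        --device /dev/fuse \
        --cap-add SYS_ADMIN \
        gluster/gluster-centos

Some environments also need --privileged or an adjusted seccomp/apparmor profile.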
21:37 hagarth joined #gluster
21:42 dbruhn joined #gluster
22:05 farhoriz_ joined #gluster
22:09 rwheeler joined #gluster
22:13 mobaer1 joined #gluster
22:17 gildub joined #gluster
22:38 neofob left #gluster
22:54 misc joined #gluster
22:55 jockek joined #gluster
22:58 d0nn1e joined #gluster
23:09 misc joined #gluster
23:40 davidbitton joined #gluster
23:44 JoeJulian @ppa
23:44 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
23:57 ctria joined #gluster
