
IRC log for #gluster, 2015-08-13


All times shown according to UTC.

Time Nick Message
00:00 volga629 what port should this be?  0-management: connection attempt on  failed, (Connection refused)
00:07 volga629 [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed. What can I do about this error?
00:08 CyrilPeponnet @JoeJulian somehow disabling root-squash or geo-repo or both fixes the issue with the client in 3.6.4 hanging on some files.... I can't explain it...
00:09 gildub joined #gluster
00:10 JoeJulian \o/
00:12 JoeJulian volga629: ignore rdma errors (unless you're trying to use rdma)
00:13 volga629 no I am not trying
00:13 CyrilPeponnet is migrating from 3.5.2 to 3.6.4 still a good idea?
00:13 CyrilPeponnet we planned this for 2 weeks from now, hoping it will help
00:14 JoeJulian yes
00:14 CyrilPeponnet good. Thanks to you @JoeJulian as usual you provide great help :)
00:14 JoeJulian Glad you were able to get it figured out.
00:21 volga629 daemon keeps dying
00:21 volga629 some SSL error
00:22 volga629 E [socket.c:384:ssl_setup_connection] 0-management: SSL connect error
00:22 volga629 E [glusterd-utils.c:181:glusterd_unlock] 0-management: Cluster lock not held!
00:22 volga629 but exactly happened
00:25 nzero joined #gluster
00:30 dgandhi joined #gluster
00:32 shyam joined #gluster
00:36 shyam left #gluster
00:43 MrAbaddon joined #gluster
00:46 nzero joined #gluster
00:48 Pupeno joined #gluster
00:48 JoeJulian volga629: run "glusterd -d | nc termbin.com 9999" and paste me the link.
00:48 Pupeno joined #gluster
00:49 volga629 glusterd: invalid option -- 'd'
00:49 JoeJulian --debug
00:50 JoeJulian There are probably volfiles missing from that segfault. Worst case, rename them back from the .rpmsave.
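A sketch of the combined command being suggested here, assuming glusterd prints its debug output to stdout/stderr when run in the foreground:
    glusterd --debug 2>&1 | nc termbin.com 9999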
00:51 prg3 joined #gluster
00:52 aezy left #gluster
00:53 volga629 I can't start the daemon on either side; I generated certs twice and both sides have each other's certs
00:56 JoeJulian volga629: I see your problem but I've never seen it before. Is this a production server or can you wipe your configuration and create the peers and volume all over again?
00:57 volga629 yes I can, it's not production yet
00:57 JoeJulian Either that or come back tomorrow when I can properly diagnose the problem. Right now I have to leave to take my daughter to a girl scout meeting.
00:58 volga629 I see thanks for the help
00:58 JoeJulian I'll be back around 9am GMT-7.
00:58 volga629 I will back up /var/lib/glusterd and start over again
01:02 theron joined #gluster
01:02 codex joined #gluster
01:03 rehunted joined #gluster
01:06 oxidane joined #gluster
01:13 MrAbaddon joined #gluster
01:17 cyberswat joined #gluster
01:18 victori joined #gluster
01:30 gildub joined #gluster
01:31 plarsen joined #gluster
01:35 nzero joined #gluster
01:48 PaulCuzner joined #gluster
01:48 Lee1092 joined #gluster
01:53 dgbaley joined #gluster
01:53 kshlm joined #gluster
01:54 dgbaley Hey, if I do a vol heal full, what happens? Is there first a check to see what needs to be healed, and then the healing? If so, how can I know when the check is done?
01:56 dgbaley Would something like "heal info split-brain" ever show issues while plain "heal info" is silent?
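A minimal sketch of the heal commands being discussed; VOLNAME is a placeholder:
    gluster volume heal VOLNAME full              # queue a full self-heal crawl of every brick
    gluster volume heal VOLNAME info              # list entries still pending heal
    gluster volume heal VOLNAME info split-brain  # list only entries in split-brain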
02:00 kshlm joined #gluster
02:00 haomaiwang joined #gluster
02:01 haomaiwang joined #gluster
02:10 harish joined #gluster
02:11 sankarshan joined #gluster
02:14 7JTAAGXOT joined #gluster
02:27 shubhendu joined #gluster
02:28 _Bryan_ joined #gluster
02:32 kevein joined #gluster
02:36 hagarth joined #gluster
02:39 nangthang joined #gluster
02:40 nangthang joined #gluster
02:47 jatone left #gluster
02:49 Pupeno joined #gluster
02:59 johnmark joined #gluster
03:01 haomaiwa_ joined #gluster
03:23 shubhendu joined #gluster
03:28 atinm joined #gluster
03:37 overclk joined #gluster
03:38 atinm joined #gluster
03:41 sakshi joined #gluster
03:42 [7] joined #gluster
03:43 atinm joined #gluster
03:44 meghanam joined #gluster
03:46 apahim joined #gluster
03:51 bharata-rao joined #gluster
03:55 ira joined #gluster
04:02 haomaiwa_ joined #gluster
04:03 itisravi joined #gluster
04:13 yazhini joined #gluster
04:20 gem joined #gluster
04:21 gem joined #gluster
04:22 ppai joined #gluster
04:23 kanagaraj joined #gluster
04:25 dgbaley joined #gluster
04:25 neha joined #gluster
04:28 jwd joined #gluster
04:29 ramky joined #gluster
04:31 nangthang joined #gluster
04:31 jwaibel joined #gluster
04:34 kshlm joined #gluster
04:36 rjoseph joined #gluster
04:37 RameshN joined #gluster
04:41 troble joined #gluster
04:49 Pupeno joined #gluster
04:56 ndarshan joined #gluster
04:57 topshare joined #gluster
04:58 ndarshan joined #gluster
05:01 haomaiwa_ joined #gluster
05:06 atinm joined #gluster
05:07 SOLDIERz joined #gluster
05:08 TvL2386 joined #gluster
05:08 deepakcs joined #gluster
05:10 Manikandan joined #gluster
05:10 ashiq joined #gluster
05:11 elico joined #gluster
05:13 hgowtham joined #gluster
05:16 rafi joined #gluster
05:16 jcastill1 joined #gluster
05:19 kotreshhr joined #gluster
05:19 nbalacha joined #gluster
05:21 jcastillo joined #gluster
05:22 atalur joined #gluster
05:23 meghanam joined #gluster
05:24 karnan joined #gluster
05:25 nzero joined #gluster
05:30 vmallika joined #gluster
05:31 maveric_amitc_ joined #gluster
05:32 anil joined #gluster
05:32 vimal joined #gluster
05:33 ira joined #gluster
05:42 aravindavk joined #gluster
05:45 nishanth joined #gluster
05:46 jwd joined #gluster
05:46 kdhananjay joined #gluster
05:47 jwaibel joined #gluster
05:53 jordie joined #gluster
05:55 raghu joined #gluster
06:00 patnarciso_ joined #gluster
06:02 haomaiwa_ joined #gluster
06:07 spalai joined #gluster
06:26 RameshN joined #gluster
06:28 spalai1 joined #gluster
06:30 autoditac_ joined #gluster
06:31 pk1 joined #gluster
06:31 ramky joined #gluster
06:31 Saravana_ joined #gluster
06:33 poornimag joined #gluster
06:39 sakshi joined #gluster
06:40 aravindavk joined #gluster
06:44 jordie joined #gluster
06:46 aravindavk joined #gluster
06:50 Pupeno joined #gluster
06:50 jordie One problem: two nodes with replica, and the throughput is half of an ordinary file server. How do I improve it?
06:52 kovshenin joined #gluster
06:58 doekia joined #gluster
07:01 nangthang joined #gluster
07:01 spalai joined #gluster
07:02 haomaiwa_ joined #gluster
07:05 mband joined #gluster
07:09 m0zes joined #gluster
07:14 gletessier joined #gluster
07:15 Saravana_ joined #gluster
07:23 rehunted joined #gluster
07:24 dastar joined #gluster
07:25 meghanam joined #gluster
07:26 javi404 joined #gluster
07:34 mband Hi, how do I restore/reassemble a distributed glusterfs 3.5 storage? I reinstalled the servers and thought I could easily say use these bricks to form this volume again, or create a new volume and just add the bricks with their data being intact.
07:34 LebedevRI joined #gluster
07:34 jordie joined #gluster
07:36 fsimonce joined #gluster
07:37 Philambdo1 joined #gluster
07:42 nbalacha joined #gluster
07:42 deepakcs joined #gluster
07:42 bharata-rao joined #gluster
07:42 RedW joined #gluster
07:42 nage joined #gluster
07:42 Lee- joined #gluster
07:42 portante joined #gluster
07:42 xaeth joined #gluster
07:42 neoice joined #gluster
07:42 JPaul joined #gluster
07:42 _fortis joined #gluster
07:42 DJClean joined #gluster
07:42 RobertLaptop joined #gluster
07:44 autoditac_ joined #gluster
07:45 natgeorg joined #gluster
07:45 Slashman joined #gluster
07:46 Lee_ joined #gluster
07:47 neoice joined #gluster
07:49 RedW joined #gluster
07:51 DJClean joined #gluster
07:51 xaeth joined #gluster
07:52 RobertLaptop joined #gluster
07:52 JPaul joined #gluster
07:52 portante joined #gluster
07:54 _fortis joined #gluster
07:54 dastar joined #gluster
07:58 nbalacha joined #gluster
07:58 deepakcs joined #gluster
07:59 bharata-rao joined #gluster
08:00 topshare joined #gluster
08:02 haomaiwa_ joined #gluster
08:19 muneerse joined #gluster
08:25 Pupeno joined #gluster
08:26 TvL2386 joined #gluster
08:27 mband If I create a new volume using force (the bricks belong to an old volume), will the data on the bricks then stay intact and just become available in the new volume?
08:36 ekuric joined #gluster
08:38 atinm mband, you won't be able to create a volume out of a brick which is already part of another volume
08:41 mband oh... Is there a way I can then get a volume up and running with the current brick data? I did a reinstall (still got the old system on an accessible partition), and want to provide access to the distributed (non-striped) data.
08:42 mband or well systems (around 10 machines)*
08:42 kotreshhr joined #gluster
08:45 atalur joined #gluster
08:46 meghanam joined #gluster
08:52 Romeor mband, copypaste.
08:53 harish joined #gluster
08:55 mband Would do it, if it weren't for one user with over 15TB of data and me not having the resources to get extra hard drives - otherwise that would be the "easy" brute-force way to do it :)
09:01 haomaiwa_ joined #gluster
09:04 kshlm joined #gluster
09:05 maveric_amitc_ joined #gluster
09:09 Romeor mband, seems like you're f*cked up
09:09 Romeor i'm not a dev, but I don't see any possibility to restore the cluster
09:10 Romeor maybe there is some workaround
09:10 Romeor ask JoeJulian
09:10 Romeor or ndevos
09:11 Romeor i'm sure there is a way though
09:11 Romeor you just have to clear all those gluster attributes
09:12 Romeor may be it is enough to just move data to another folder on the same partition and create a new cluster, never tried
09:12 drankis joined #gluster
09:13 Romeor oh yes
09:13 Romeor JoeJulian, already gave us some light on this
09:14 Romeor https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
09:14 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
09:14 troble left #gluster
09:14 Romeor this _should_ work
09:14 * Romeor crosses his fingers saying this
09:14 jwd joined #gluster
09:15 jwd joined #gluster
09:16 no2me joined #gluster
09:16 Saravana_ joined #gluster
09:16 Romeor mband, you should give it a try
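For reference, the fix described in that blog post is roughly the following, run on each brick root before recreating the volume (a sketch; /data/brick is a placeholder path):
    setfattr -x trusted.glusterfs.volume-id /data/brick   # drop the old volume id
    setfattr -x trusted.gfid /data/brick                  # drop the gfid of the brick root
    rm -rf /data/brick/.glusterfs                         # remove the old gluster metadata tree
    # then restart glusterd and create the new volume over the same bricks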
09:17 no2me Hey guys, I just set up glusterfs with 2 bricks, but I can't write to the mount point. I can write to the brick itself. The error: cannot create directory `1': Invalid argument
09:18 Romeor no2me, hi. mount options pls
09:19 no2me mount -t glusterfs glusterfs.slave1:/myvolume /var/mydir/
09:19 no2me (ignore the hostname please) I know it's not a master slave setup
09:19 mband nice, will try it out - was about to ask about just deleting the meta data folder.
09:23 no2me Romeor I am not using fstab yet
09:25 MrAbaddon joined #gluster
09:26 * Romeor got no ideas
09:27 Romeor gluster version?
09:27 Romeor volume setup?
09:27 deniszh joined #gluster
09:29 no2me @Romeor glusterfs 3.2.7 built and volume setup http://pastebin.com/1w5JYzEr
09:30 Romeor 3.2.7 ? are you sure?
09:31 no2me glusterd -V
09:31 Romeor and you're sure it is not 3.7.2 ?
09:32 Romeor 3.2.7 is extremely outdated
09:32 Romeor use something decent
09:32 no2me whaha sorry
09:32 no2me let me update
09:32 no2me glusterfs 3.2.7 built on Sep 28 2013 18:15:18 yep
09:32 no2me outdated
09:32 Romeor use 3.6 or 3.7 series
09:33 no2me forgot the update the os after reinstalling
09:33 no2me my bad
09:34 Romeor happens
09:37 gildub joined #gluster
09:40 nsoffer joined #gluster
09:41 poornimag joined #gluster
09:45 rjoseph joined #gluster
09:46 atinm joined #gluster
09:48 Manikandan joined #gluster
09:51 atalur joined #gluster
09:51 ajames-41678 joined #gluster
09:52 Saravana_ joined #gluster
09:53 nishanth joined #gluster
09:53 ndarshan joined #gluster
09:54 PaulCuzner joined #gluster
09:57 kotreshhr joined #gluster
09:59 anil joined #gluster
10:00 sakshi joined #gluster
10:02 haomaiwang joined #gluster
10:06 Manikandan joined #gluster
10:11 anil_ joined #gluster
10:13 zerick_ joined #gluster
10:14 delhage joined #gluster
10:15 frakt joined #gluster
10:15 obnox joined #gluster
10:26 apahim joined #gluster
10:33 ctria joined #gluster
10:37 anonymus joined #gluster
10:37 anonymus hi guys
10:37 anonymus please help me
10:38 elico joined #gluster
10:38 anonymus I need to fsck a disk which is a brick of distributed-replicated volume
10:38 anonymus what should I do?
10:39 anonymus stop volume on the node; umount  brick; check the disk and then start volume again?
10:44 nsoffer joined #gluster
10:45 anonymus please anyone
10:47 itisravi anonymus: if the brick has a corresponding replica on another node, you don't need to stop the volume. Just kill this brick process, umount, fsck, mount and gluster volume start force.
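A rough sketch of that sequence; the volume name, mount point and device are placeholders, and it assumes the surviving replica is healthy:
    gluster volume status VOLNAME        # find the PID of the affected brick process
    kill <brick-pid>                     # stop only that brick, the volume stays up
    umount /bricks/sdd1
    fsck /dev/sdd1
    mount /dev/sdd1 /bricks/sdd1
    gluster volume start VOLNAME force   # restart the killed brick process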
10:49 anonymus itisravi: it seems like it has a replica
10:50 firemanxbr joined #gluster
10:50 anonymus so I don't need to 'gluster volume remove-brick distr-repl-sdd1 be7:/sdd1 start'
10:50 anonymus ?
10:51 itisravi anonymus: remove brick will remove the brick from the volume configuration for ever. Is that what you want?
10:51 anonymus no!
10:51 anonymus I want just to check the disk
10:51 itisravi anonymus: then what I said earlier should be okay.
10:51 anonymus maybe replace by another one
10:52 itisravi anonymus: okay
10:52 anonymus itisravi: so I have to ps axu| grep gluster | grep <volname>? and kill the corresponding processes?
10:53 anonymus Brick be7:/sdd1                        49152    Y    921827
10:53 anonymus or kill -9 921827?
10:53 anonymus >gluster volume status distr-repl-sdd1
10:53 anonymus Brick be7:/sdd1                        49152    Y    921827
10:53 ajames-41678 joined #gluster
10:54 itisravi anonymus: yes kill 921827
10:54 anonymus ok. thank you very much!
10:54 itisravi anonymus: welcome :)
10:55 anonymus itisravi: it seems to start again
10:56 anonymus oh sorry, no
10:59 sakshi joined #gluster
10:59 anonymus seems like the disk is completely dead
10:59 anonymus :(
11:02 no2me did we lose support for debian wheezy?
11:02 haomaiwa_ joined #gluster
11:03 no2me can't get any version 3.4 or higher
11:06 gem joined #gluster
11:06 anonymus that's a pity
11:06 anonymus I've got debian :(
11:07 jrm16020 joined #gluster
11:08 ndarshan joined #gluster
11:11 atinm joined #gluster
11:11 anonymus itisravi: one more question please
11:12 shubhendu joined #gluster
11:12 itisravi anonymus: shoot
11:12 anil_ joined #gluster
11:12 muneerse2 joined #gluster
11:13 anonymus the mentioned disk seems to be dead. do I have to add brick into the volume after replacing the disk
11:13 anonymus ?
11:14 itisravi anonymus: you can do a "gluster vol replace-brick <volname> <old-brick> <new-brick> commit force" followed by a 'gluster vol heal <volname> full`
11:14 itisravi anonymus: are you running 3.4?
11:14 spalai left #gluster
11:16 anonymus glusterfsd --version                                                                                                                                                                                                             ~
11:16 anonymus glusterfs 3.5.2 built on Aug 20 2014 13:10:47
11:16 anonymus 3.5
11:16 itisravi anonymus: okay
11:17 anonymus so I create new brick on the be7 node; then heal volume; then replace?
11:18 itisravi create, replace and heal.
11:18 anonymus aha
11:18 anonymus I got. thank you!
11:18 itisravi anonymus: the heal command will sync data from the healthy replica  brick to this newly replaced brick.
11:19 anonymus I got.
11:19 itisravi cool.
11:20 ChrisNBlum joined #gluster
11:23 itisravi anonymus: before replacing, can you ensure that 'gluster vol heal <volname> info` shows zero-entries?
11:26 anonymus itisravi: http://pastebin.com/EdUmxhAC here is the status
11:27 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:30 itisravi anonymus: It is better to follow the procedure listed here: http://review.gluster.org/#/c/8503/3/doc/admin-guide/en-US/markdown/admin_replace_brick.md
11:31 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:31 itisravi anonymus: see line #102 onwards.
11:31 anonymus thank you, itisravi. I'll read it
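Put together, the procedure itisravi outlined looks roughly like this; the new brick path be7:/new-sdd1 is a placeholder:
    gluster volume heal distr-repl-sdd1 info                                          # should show zero pending entries first
    gluster volume replace-brick distr-repl-sdd1 be7:/sdd1 be7:/new-sdd1 commit force
    gluster volume heal distr-repl-sdd1 full                                          # resync from the healthy replica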
11:31 Romeor no2me, here?
11:31 no2me yes
11:32 no2me I am
11:32 Romeor no, d7 is not dropped
11:32 Romeor i use d7 with 3.6 line
11:32 Romeor just add glusterfs proxmox directly to apt
11:32 Romeor damn
11:32 Romeor just add the glusterfs repo to apt
11:32 Romeor :d
11:32 no2me not the official?
11:32 kkeithley1 joined #gluster
11:33 no2me ahh found it
11:33 no2me http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.4/Debian/wheezy/ :S
11:33 glusterbot Title: Index of /pub/gluster/glusterfs/3.6/3.6.4/Debian/wheezy (at download.gluster.org)
11:34 Romeor yes. use LATEST to get the latest release
11:34 Romeor every time
11:34 Romeor it is official
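Wiring that into apt would look something like the sketch below; the apt/ subdirectory, suite and component names are assumptions, so take the exact path and signing key from the directory index above rather than from this example:
    echo 'deb http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/apt wheezy main' > /etc/apt/sources.list.d/gluster.list
    apt-get update && apt-get install glusterfs-server glusterfs-client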
11:34 Romeor BUT
11:35 Romeor there is a BIG BUT!
11:35 Romeor packages are being maintained by community members
11:35 Romeor and there is one pretty shitty bug after upgrade to 3.6, don't know if it is fixed
11:35 Romeor i'll give you a link
11:36 doekia joined #gluster
11:37 Romeor known BUG from 3.6.3 !? Please FIX .deb packages!
11:37 Romeor https://bugzilla.redhat.com/show_bug.cgi?id=1191176
11:37 Romeor had to run glusterd --xlator-option *.upgrade=on -N
11:37 glusterbot Bug 1191176: urgent, unspecified, ---, bugs, ON_QA , Since 3.6.2: failed to get the 'volume file' from server
11:38 Iodun joined #gluster
11:38 Romeor just after the upgrade, run that last command
11:38 Romeor had to run glusterd --xlator-option *.upgrade=on -N
11:38 Romeor it will fix everything
11:39 Romeor this should be run during the upgrade process, but some1 forgot to add it :)
11:39 Romeor was pretty big surprise for me (i'm running glusterfs in production)
11:40 Romeor i thought my hair will go gray
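In practice the workaround Romeor describes would be run right after the package upgrade, something like this (a sketch, assuming the Debian service name glusterfs-server):
    service glusterfs-server stop
    glusterd --xlator-option '*.upgrade=on' -N   # regenerate the volfiles once, then exit
    service glusterfs-server start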
11:44 dlambrig_ joined #gluster
11:46 no2me @Romeor thanks package has been updated :)
11:46 no2me lets try now to remake and mount
11:46 Romeor don't forget to update client also
11:53 anil joined #gluster
11:55 no2me they are both updated grrr...
11:55 no2me Setting extended attributes failed, reason: Operation not permitted. :S
11:55 no2me something is really wrong here
11:56 gem joined #gluster
12:00 nbalacha joined #gluster
12:01 jvandewege_ joined #gluster
12:09 ira joined #gluster
12:11 meghanam joined #gluster
12:12 jrm16020 joined #gluster
12:13 unclemarc joined #gluster
12:15 kotreshhr left #gluster
12:19 kdhananjay joined #gluster
12:23 mband Romeor: finally got around to clearing the xattrs and (re)moving the meta data, and creating a new volume with the old bricks - it appears to work as intended, thanks.   It would be nice if the blog post could be more visible, but I don't know where a link to it could be added (or the information be written into the glusterfs documentation).
12:28 itisravi joined #gluster
12:31 kshlm joined #gluster
12:34 haomaiwa_ joined #gluster
12:40 harish joined #gluster
12:44 xavih joined #gluster
12:44 malevolent joined #gluster
12:51 gem joined #gluster
12:54 kdhananjay joined #gluster
12:56 ajames-41678 joined #gluster
12:59 daMaestro joined #gluster
13:00 theron joined #gluster
13:02 haomaiwa_ joined #gluster
13:03 B21956 joined #gluster
13:09 chirino joined #gluster
13:11 balu1 joined #gluster
13:12 poornimag joined #gluster
13:13 balu1 Hi there. I have a question about experience with samba projects. I would like to set up a samba share with Active Directory authentication, and it should work across 2 or more servers. Does anyone have experience with sharing the tdb or tdb2 backend across the cluster for consistent sync of the user id mapping?
13:13 shyam joined #gluster
13:14 rwheeler joined #gluster
13:16 SeerKan joined #gluster
13:16 mpietersen joined #gluster
13:18 mpietersen joined #gluster
13:20 mpietersen joined #gluster
13:31 haomaiwa_ joined #gluster
13:32 aaronott joined #gluster
13:43 Trefex joined #gluster
13:46 julim joined #gluster
13:54 nzero joined #gluster
13:57 bennyturns joined #gluster
13:57 theron joined #gluster
14:02 haomaiwa_ joined #gluster
14:05 balu1 FYI: samba dev says "no". It is actually not a safe way to replicate a tdb database over n nodes
14:11 no2me joined #gluster
14:13 no2me is it true that Glusterfs doesn't really like OpenVZ?
14:15 elico joined #gluster
14:17 Iodun joined #gluster
14:19 Twistedgrim joined #gluster
14:21 jwd joined #gluster
14:24 dgandhi joined #gluster
14:26 kkeithley_ no2me: huh?
14:32 deepakcs joined #gluster
14:35 shubhendu joined #gluster
14:37 B21956 joined #gluster
14:42 nzero joined #gluster
14:44 _Bryan_ joined #gluster
14:49 B21956 joined #gluster
14:51 Trefex do i have to change the epel repo to upgrade gluster from 3.6 to 3.7?
14:51 kshlm joined #gluster
15:00 Trefex also, is this not a bug? http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/glusterfs-epel.repo
15:00 Trefex shouldn't the baseurl include the version number in this case?
15:00 Trefex and not point to LATEST only?
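One way to switch to the 3.7 repo (a sketch; assumes a CentOS/RHEL host with the repo file in /etc/yum.repos.d/ and the listed package names installed):
    curl -o /etc/yum.repos.d/glusterfs-epel.repo http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/glusterfs-epel.repo
    yum clean metadata && yum update glusterfs-server glusterfs-fuse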
15:02 haomaiwa_ joined #gluster
15:07 squizzi joined #gluster
15:11 shyam joined #gluster
15:13 balu1 left #gluster
15:15 wushudoin joined #gluster
15:18 _maserati joined #gluster
15:24 no2me could someone help me understand the following error logs? http://pastebin.com/Riq8JKec
15:24 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:25 no2me sorry let me put that log again
15:25 no2me http://fpaste.org/254809/94795201/
15:25 glusterbot Title: #254809 Fedora Project Pastebin (at fpaste.org)
15:51 cholcombe joined #gluster
15:54 Trefex how long does a fix-layout take on 3 nodes with 130 TB each ?
15:54 Trefex millions of files
15:56 nzero joined #gluster
15:57 nzero joined #gluster
16:03 plarsen joined #gluster
16:04 victori joined #gluster
16:09 Norky joined #gluster
16:14 gletessier_ joined #gluster
16:16 cholcombe Trefex, you're prob looking at days easily
16:16 Trefex cholcombe: omg ok
16:16 cholcombe Trefex, just a guess
16:16 Trefex cholcombe: didn't know it would take that long to add a new node :)
16:16 cholcombe Trefex, well i know a heal would take pretty long when you're up into the millions area
16:17 cholcombe Trefex, which gluster version?  3.6 has multithreading improvements that should help
16:17 timotheus1 joined #gluster
16:19 Trefex cholcombe: 3.7.3 now in fact
16:19 cholcombe oh ok.  it'll prob be much faster, then
16:19 Trefex upgraded earlier today :)
16:20 cholcombe cool
16:20 Trefex yeah i can see 4-5 processe using 18% CPU or something
16:20 cholcombe not bad
16:20 Trefex and 300k/s io or something
16:21 cholcombe haha
16:21 cholcombe wow
16:21 Trefex slow? :(
16:21 cholcombe no sounds good to me
16:22 Trefex cool, hopefully it will work this time, on 3.6.3 the relayout never finished, got some error, which we thought is linked to a bug that should be fixed in 3.7, so fingers crossed
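For reference, the fix-layout being discussed is started and checked like this (VOLNAME is a placeholder):
    gluster volume rebalance VOLNAME fix-layout start   # recompute directory layouts only, no data is moved
    gluster volume rebalance VOLNAME status             # shows scanned counts and failures per node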
16:29 CyrilPeponnet joined #gluster
16:30 CyrilPeponnet joined #gluster
16:30 CyrilPeponnet joined #gluster
16:31 CyrilPeponnet joined #gluster
16:31 CyrilPeponnet joined #gluster
16:36 ghenry joined #gluster
16:39 RameshN joined #gluster
16:41 chirino joined #gluster
16:48 vimal joined #gluster
16:48 uebera|| joined #gluster
16:54 jdossey joined #gluster
16:56 calavera joined #gluster
16:57 muneerse joined #gluster
16:59 rafi joined #gluster
17:01 mckaymatt joined #gluster
17:06 victori joined #gluster
17:08 trav408 joined #gluster
17:08 gem joined #gluster
17:10 gem joined #gluster
17:12 RameshN joined #gluster
17:12 mckaymatt Hi. Is there an easy way to find out which ports need to be opened for a gluster server running libgfapi over TCP?
17:13 JoeJulian @ports
17:13 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
17:16 mckaymatt Oh thanks
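An iptables sketch matching glusterbot's port list; the brick range 49152:49200 is an assumption sized for a few dozen bricks, widen it to match your brick count:
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # glusterd management (and rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT            # brick ports, 3.4 and later
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT            # gluster NFS and NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT # rpcbind and NFS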
17:17 nzero joined #gluster
17:17 calisto joined #gluster
17:21 RameshN_ joined #gluster
17:22 bfoster joined #gluster
17:33 RameshN__ joined #gluster
17:46 volga629 Hello
17:46 glusterbot volga629: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:46 jobewan joined #gluster
17:46 volga629 back with the issue from yesterday where I can't initiate SSL between 2 nodes
17:50 gem joined #gluster
17:51 RameshN_ joined #gluster
17:51 deniszh joined #gluster
17:52 JoeJulian Hey there volga629. So where did you end up?
17:55 wushudoin| joined #gluster
17:56 volga629 tried removing /var/lib/glusterd, but it didn't help
17:59 JoeJulian You removed it from both servers and glusterd still would not start?
18:00 wushudoin| joined #gluster
18:04 tanuck joined #gluster
18:05 calavera joined #gluster
18:09 volga629 yes, both servers, and glusterd keeps crashing
18:09 nzero joined #gluster
18:12 JoeJulian volga629: Which distro is this again?
18:14 volga629 fedora21 server x86_64
18:15 peter__ joined #gluster
18:15 deniszh joined #gluster
18:17 JoeJulian Mmkay, I'll have to spin up a new VM to check that. All my fedora are upgraded to 22.
18:19 its-peter I have a nightly database backup being copied to a Gluster volume that is then geo-replicated to another DC and I'm having an issue where the file appears to be copied as what looks like a temp file. The filename on the geo-rep slave begins with a . and ends with a sequence of 6 characters but has the same size as the original file. I could sure use some suggestions on how to straighten this out. Thanks in advance.
18:19 shaunm joined #gluster
18:20 ctria joined #gluster
18:20 JoeJulian its-peter: Sounds like an rsync tempfile. If you're using rsync - try --inplace.
18:21 its-peter I will, thanks!
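With --inplace, rsync updates the destination file directly instead of writing a dot-prefixed temp file and renaming it afterwards; a sketch with placeholder paths:
    rsync -av --inplace /backup/nightly.dump /mnt/glustervol/backups/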
18:41 jobewan joined #gluster
18:59 gem joined #gluster
19:05 MrAbaddon joined #gluster
19:16 mckaymatt joined #gluster
19:34 dlambrig_ joined #gluster
19:38 mator joined #gluster
19:46 trav408 I have 2 servers running gluster. 1 server had to be replaced so I am trying to figure out the best way to restore it. Is there a good doc or tutorial to look at on this subject?
19:57 calavera joined #gluster
19:58 nzero joined #gluster
19:59 gem joined #gluster
20:21 oytun joined #gluster
20:25 oytun Is anybody here? I've been getting (No data available) failures since yesterday when I try to connect a new client. I am about to give up... Any experience with client connectivity problems?
20:25 jbautista- joined #gluster
20:26 JoeJulian 254 people in here.
20:26 JoeJulian Well, 253. glusterbot is... of course... a bot.
20:26 JoeJulian And there's a couple of logging bots...
20:26 JoeJulian but you get the idea.
20:27 oytun :)
20:27 oytun right...
20:29 JoeJulian Anyway... Check the client log for clues. Share with a ,,(paste) tool if you need another pair of eyes.
20:29 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
20:31 oytun I have been checking them since yesteday
20:31 oytun And at the moment, this is the log block that keeps repeating:
20:31 oytun http://pastebin.com/3C3C1fdV
20:31 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:31 oytun http://fpaste.org/254909/97906143/
20:31 glusterbot Title: #254909 Fedora Project Pastebin (at fpaste.org)
20:33 oytun - I can reach the port via telnet.
20:33 oytun - Servers are hosted on AWS EC2. Security groups are fine. No iptable rules.
20:33 Pupeno joined #gluster
20:34 oytun - I am also pasting volume configs. They are almost globally open.
20:34 oytun Volume config: http://fpaste.org/254911/49806814/
20:34 glusterbot Title: #254911 Fedora Project Pastebin (at fpaste.org)
20:34 oytun Same in both peers
20:35 oytun There are other 2 clients which can reach properly.
20:35 JoeJulian Depending on your flavor (I forget what amazon calls that) it may be blocking connections from privileged ports. That's been encountered by others.
20:35 oytun I can't make this third server access glusterfs.
20:35 oytun Security group, I assume? They are all open for requests from this server.
20:35 oytun I can also access via telnet
20:36 JoeJulian Right, but telnet doesn't originate from a port <= 1024.
20:37 oytun Hmm
20:37 JoeJulian nc -p 1000 $server 24007
20:38 jdossey joined #gluster
20:39 dgbaley joined #gluster
20:40 JoeJulian oytun: https://botbot.me/freenode/gluster/2014-04-03/?msg=12921427&page=8
20:40 glusterbot Title: IRC Logs for #gluster | BotBot.me [o__o] (at botbot.me)
20:41 oytun still waiting :(
20:41 JoeJulian Maybe a bad test. it's not verbose.
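A slightly more telling variant of that test; flag support varies between netcat flavours, -z only checks that the connection opens and -v reports the result:
    nc -zv -p 1000 $server 24007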
20:43 oytun and sometimes it connects, but the files are broken
20:44 oytun or the connection.
20:44 jbautista- joined #gluster
20:44 oytun ??????????    ? ?      ?         ?            ? README
20:44 MarceloLeandro_ joined #gluster
20:44 oytun ??????????    ? ?      ?         ?            ? README
20:44 oytun kind of lists appear.
20:48 papamoose1 joined #gluster
20:55 mckaymatt joined #gluster
20:59 JoeJulian Might make sense... depends on what the logs say.
21:09 TheCthulhu1 joined #gluster
21:22 oytun Tried getting the server out of the amazon VPC, then into the same network as the gluster servers. Neither worked... :(
21:25 nzero joined #gluster
21:32 nangthang joined #gluster
21:38 badone__ joined #gluster
21:39 shaunm joined #gluster
21:42 mckaymatt joined #gluster
21:52 daMaestro joined #gluster
22:06 corretico joined #gluster
22:27 nzero joined #gluster
22:32 julim joined #gluster
22:35 gildub joined #gluster
22:43 plarsen joined #gluster
22:45 calavera joined #gluster
22:55 theron joined #gluster
22:59 theron joined #gluster
23:14 shyam joined #gluster
23:22 shyam joined #gluster
23:25 Pupeno_ joined #gluster
23:34 mckaymatt joined #gluster
23:58 julim joined #gluster
