
IRC log for #gluster, 2017-09-18


All times shown according to UTC.

Time Nick Message
00:10 omie888777 joined #gluster
00:39 omie888777 joined #gluster
00:45 edong23 joined #gluster
01:04 foster joined #gluster
01:23 baber joined #gluster
01:54 ilbot3 joined #gluster
01:54 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:20 kramdoss_ joined #gluster
03:04 freephile joined #gluster
03:08 freephile All my gluster files are gone - don't know how or why
03:09 freephile Can I get them back from the bricks?
03:14 freephile I guess it's just unmounted
03:24 nh2 joined #gluster
03:26 gyadav_ joined #gluster
03:46 freephile Actually, I have no idea what I'm supposed to do to recover here.  I have several GB of files in brick directories, but the mount shows nothing
03:46 freephile And gluster thinks everything is fine.
03:46 nbalacha joined #gluster
03:48 freephile Could there be a file permission problem?
03:49 freephile I chown'd everything apache:apache
03:49 freephile and set perms on all files and directories
03:52 itisravi joined #gluster
03:53 Humble joined #gluster
03:55 freephile sudo mount -a brought everything back
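As the exchange above shows, the symptom (bricks still populated but the client mount empty) usually just means the gluster volume is no longer mounted. A quick sanity check might look like this; the mount point is illustrative, not taken from freephile's setup:

    mount | grep glusterfs     # is the gluster FUSE mount still present?
    df -h /mnt/glustervol      # an unmounted path reports the underlying local filesystem instead
    sudo mount -a              # re-mount everything listed in /etc/fstab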
04:09 ppai joined #gluster
04:15 plarsen joined #gluster
04:16 dominicpg joined #gluster
04:18 jiffin joined #gluster
04:25 skylar joined #gluster
04:30 DV joined #gluster
04:58 poornima_ joined #gluster
05:00 atinm joined #gluster
05:07 skoduri joined #gluster
05:08 skumar joined #gluster
05:14 ndarshan joined #gluster
05:14 anthony25 joined #gluster
05:17 kdhananjay joined #gluster
05:18 aravindavk joined #gluster
05:24 karthik_us joined #gluster
05:30 sunnyk joined #gluster
05:33 apandey joined #gluster
05:34 omie88877777 joined #gluster
05:36 Humble joined #gluster
05:39 hgowtham joined #gluster
05:42 Prasad joined #gluster
05:43 msvbhat joined #gluster
05:54 karthik_us joined #gluster
06:05 rafi joined #gluster
06:06 Bonaparte joined #gluster
06:08 Bonaparte How can I remove gluster metadata? I removed the gluster packages, removed /etc/glusterd /etc/glusterfs/ /var/log/glusterfs/ and installed the packages again. When I start glusterd it still remembers the metadata from the old installation
06:08 Bonaparte I also killed all gluster processes before installing gluster again
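Bonaparte's question is not answered in the log. One likely explanation is that glusterd keeps its volume and peer state under /var/lib/glusterd, which is not among the directories removed above. A sketch of a full wipe, assuming nothing on the node needs to be preserved:

    systemctl stop glusterd
    pkill -f glusterfs         # make sure no brick or self-heal processes linger
    rm -rf /var/lib/glusterd /etc/glusterfs /var/log/glusterfs
    # note: brick directories also carry trusted.* xattrs; only reuse them after clearing those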
06:30 sanoj joined #gluster
06:32 jtux joined #gluster
06:35 jkroon joined #gluster
06:35 MikeLupe joined #gluster
06:42 susant joined #gluster
06:44 dominicpg joined #gluster
06:46 ivan_rossi joined #gluster
06:46 ivan_rossi left #gluster
06:58 apandey joined #gluster
07:25 mbukatov joined #gluster
07:35 [diablo] joined #gluster
07:51 _KaszpiR_ joined #gluster
07:55 kdhananjay joined #gluster
07:57 cyberbootje joined #gluster
08:03 jiffin joined #gluster
08:04 fsimonce joined #gluster
08:09 msvbhat joined #gluster
08:27 apandey_ joined #gluster
08:36 apandey__ joined #gluster
08:36 marlinc joined #gluster
08:38 apandey joined #gluster
08:45 jiffin joined #gluster
08:57 pavel joined #gluster
08:57 pavel Hi there
08:58 pavel Would someone please advise? I'm having the following problem: 'touch test' fails with 'touch: test: No space left on device'
08:58 pavel i have two bricks; one is full but the second is totally empty
08:59 pavel why, in distributed mode, doesn't it split files between bricks?
09:04 jiffin pavel: glusterfs distribution works based on hashing
09:04 jiffin say you have a file 'a' which hashes to brick 'b1' in your cluster
09:04 jiffin if there is no space in b1, then file creation fails
09:05 nbalacha pavel, if you are out of space it should try to create it in the other brick
09:06 nbalacha can you check the client logs?
09:09 portante joined #gluster
09:09 ndk_ joined #gluster
09:18 pavel nbalacha, i have two bricks within the same volume; one brick is full but there is still 50% of space available for the volume
09:18 pavel and i can't perform write operations; inodes are available as well, and there are only two files
09:19 pavel and there is an automatic import process that just writes to the FS, and as a user i want it to be distributed across bricks automatically
09:20 pavel at least that's the expected result
09:20 pavel i'm using gluster 3.8 - maybe someone knows if this is solved in later releases?
09:22 msvbhat joined #gluster
09:23 buvanesh_kumar joined #gluster
09:46 nbalacha pavel, it should be distributed automatically
09:47 major joined #gluster
09:47 nbalacha do you have a mount log?
09:47 nbalacha as well as gluster volume info output
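The two things nbalacha asks for can be gathered roughly like this. The log filename is an assumption based on the usual FUSE-client convention of naming the log after the mount point with slashes turned into dashes:

    gluster volume info VOLNAME
    df -h /path/to/brick1 /path/to/brick2      # confirm that only one brick is actually full
    less /var/log/glusterfs/mnt-VOLNAME.log    # FUSE client (mount) log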
09:49 [fre] guys... This may be a silly question....
09:50 [fre] we updated our gluster-infra to one of the latest RHs... being glusterfs-server-3.8.4-18.4.el7rhgs.x86_64
09:50 kkeithley joined #gluster
09:50 [fre] What is the supposed op-version we should use?
09:51 cloph the latest your cluster supports
09:51 cloph gluster volume get all cluster.max-op-version
09:51 [fre] that's only relevant for gluster 3.10 and higher, I think, no?
09:52 [fre] # gluster volume get data cluster.op-version
09:52 [fre] Option                                  Value
09:52 [fre] ------                                  -----
09:52 [fre] cluster.op-version                      30712
09:52 glusterbot [fre]: ----'s karma is now -7
09:52 glusterbot [fre]: ---'s karma is now -4
09:52 [fre] ooops
09:54 Bonaparte Somebody needs to fix the regexes in the bot :)
09:54 [fre] so, if rpm with 3.8.4-18 is installed everywhere, must I set the op-version to "30804" ?
09:55 [fre] Bonaparte, -7... that's not really a score I like, indeed.
09:57 cloph freephile: that is relevant for all gluster versions - you're limited to using features that are supported by that max version.
09:57 cloph so you don't /have/ to bump it when you upgrade, but you likely should.
09:58 cloph if you just  do bugfix  releases, then the version typically isn't bumped anyway, but if you  go from maj.min to maj.min+x, then it's worth bumping IMHO.
09:58 cloph and no, the version is not necessarily like that.
09:59 cloph that's why there's the max version check to see what would be possible.
09:59 [fre] ok... max-version is not an existing variable...
09:59 cloph if you want to know before actually updating  all peers, you'd need to look in the code
09:59 cloph the max-value would be what your cluster would support.
09:59 [fre] all peers are updated to the same major-versions...
10:00 cloph so you're currently set to 30712, and since you apparently did update to 3.8, the version could be bumped (but why update to 3.8 and not to 3.10 or 3.12, the LTS versions?)
10:01 [fre] because they are not offered as such by RH.
10:03 [fre] is there any benefit performance-wise? consequences? advantages? or what is it exactly used for?
10:05 cloph nah, just the .10 and .12 versions are the longer-supported versions, while the others are just "fillers" for those wanting the latest-and-greatest - but if using distro versions, esp. RHEL, then of course that doesn't apply/they have their own update policies.
10:06 cloph and of course newer versions have newer features and bugfixes that might not have been backported yet.
10:06 cloph (or won't be at all, since older versions are no longer maintained once the new version comes out).
10:06 cloph Esp. if you're using geo-replication, then 3.12 might have a crucial fix re symlinks (that I e.g. need, so I bumped to 3.12 right away after it was released)
10:08 cloph https://www.gluster.org/release-schedule/#release-status → 3.8 became EOL once 3.12 was released (i.e. no support/new versions anymore from gluster.org - but RHEL of course can still offer fixes/updates)
10:08 glusterbot Title: Gluster » Release Schedule (at www.gluster.org)
10:08 [fre] but ... wait... setting the op-version does not do anything either?
10:09 cloph [fre]: setting the version allows gluster to use new features that were introduced - whether it makes a difference for you depends on whether you want to enable those features (and whether those new features affect the volume types you're using atm) → check the release notes for changes.
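Putting cloph's advice together, checking and bumping the op-version might look like the following. The value 30800 is only an illustration for a 3.8 cluster; the authoritative number is whatever cluster.max-op-version reports (a query that itself needs 3.10 or newer):

    gluster volume get VOLNAME cluster.op-version      # what the cluster currently runs at
    gluster volume get all cluster.max-op-version      # highest value all peers support (3.10+)
    gluster volume set all cluster.op-version 30800    # bump once every peer has been upgraded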
10:12 g_work joined #gluster
10:13 g_work hi there, can somebody tell me what happens if the .gluster directory is erased on a single-brick volume
10:13 g_work is all data lost ?
10:13 g_work is it recoverable ?
10:16 cloph gluster volume information is lost, but not all data - your files should still be there in the hierarchy next to the .gluster directory
10:16 g_work ok
10:16 cloph and recoverable in the sense that you  can create a new volume and copy back that data to the volume.
10:17 g_work ok ok
10:19 g_work but what if I do a "gluster volume set 'myvolume'" with the old volume name? will that erase the remaining data? (I really don't want that, obviously)
10:19 cloph g_work: move the old data out of the way and create a new brick directory - don't risk having old extended attributes mess things up.
10:20 g_work (... i'm new to gluster and must fix some problems left by previous sysadmin)
10:20 g_work ok
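The 'old extended attributes' cloph warns about can be inspected, and if the directory really must be reused, removed, roughly like this; the brick path is hypothetical:

    getfattr -m . -d -e hex /data/old-brick                    # look for trusted.glusterfs.volume-id and trusted.gfid
    setfattr -x trusted.glusterfs.volume-id /data/old-brick    # only if deliberately reusing the directory as a brick
    setfattr -x trusted.gfid /data/old-brick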
10:20 nbalacha pavel, susant pointed out that creates can fail if the brick is so full we cannot even create the linkto file
10:21 nbalacha pavel, can you paste the error messages here or in a pm if you prefer
10:23 pavel nbalacha, which log? brick log?
10:23 pavel [2017-09-18 09:29:55.999768] E [MSGID: 113022] [posix.c:1358:posix_mknod] 0-vol_5b9d0a6175cb8abcffee37b8e0977213-posix: mknod on /var/lib/heketi/mounts/vg_237bd760341b4b966974281fa94bf0a5/brick_19eb8a82e035f2770eea1c1ad53754fa/brick/test failed [No space left on device] [2017-09-18 09:29:55.999836] E [MSGID: 115057] [server-rpc-fops.c:542:server_mknod_cbk] 0-vol_5b9d0a6175cb8abcffee37b8e0977213-server: 74651238: MKNOD /test (00000000
10:23 pavel it doesn't try to use the second brick of this volume
10:23 nbalacha pavel, that is because it cannot create the linkto file
10:23 nbalacha that is what the mknod is doing
10:24 pavel yeah i know, but there is a second brick available
10:24 nbalacha pavel, dht doesnt work like that
10:24 nbalacha it needs the pointer file
10:24 nbalacha is there a file on this brick you can delete? Or copy if off the volume and back into it from the mount point
10:24 pavel okay, but what if i need to upload a 500GB file to a volume with two bricks of 300GB each?
10:24 [fre] cloph, just to be sure... changing the op-version does not impact any running actions in the cluster, right?
10:25 pavel volume size is shown as 600GB
10:25 nbalacha pavel, that is not supported in a dht volume
10:25 nbalacha the size of the file is limited by the brick
10:25 nbalacha files are not split
10:25 pavel any changes in further versions?
10:26 nbalacha pavel, sharding is expected to help there but we currently support only VM use cases
10:26 nbalacha with sharded volumes
10:26 pavel i'm actually trying to import a vm image to a glusterfs volume
10:26 cloph [fre]: don't know for sure - it might trigger reconnect to peers
10:27 nbalacha pavel, I would suggest raising this in the gluster-users mailing list
10:27 nbalacha I dont see the sharding dev on irc but she will be able to answer your questions
10:27 pavel i raised a bug on bugzilla, and you're right, i'm going to send this through the mailing list
10:28 pavel thanks :) much appreciated
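For reference, sharding (which nbalacha mentions as the way to let a single file span bricks) is a per-volume option; a minimal sketch of enabling it, with the caveat from the discussion that at the time it was only supported for VM-image workloads, and noting that it only applies to files created after it is turned on. The 64MB block size is just an example value:

    gluster volume set VOLNAME features.shard on
    gluster volume set VOLNAME features.shard-block-size 64MB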
10:28 [fre] cloph, cool... done it. cool it's done on one machine for the whole cluster. that's nice.
10:28 [fre] thought machines were going to complain if running different op-versions... but it's very global.
10:30 rwheeler joined #gluster
10:34 WebertRLZ joined #gluster
10:36 Bonaparte joined #gluster
10:39 baber joined #gluster
10:44 baber joined #gluster
10:54 sunkumar joined #gluster
11:06 sunnyk joined #gluster
11:24 MrAbaddon joined #gluster
11:31 MrAbaddon joined #gluster
11:45 susant joined #gluster
11:45 kdhananjay joined #gluster
11:51 atinm joined #gluster
12:09 gyadav_ joined #gluster
12:20 atinm joined #gluster
12:28 skumar joined #gluster
12:29 nh2 joined #gluster
12:33 skumar_ joined #gluster
12:47 baber joined #gluster
12:59 jstrunk joined #gluster
13:02 sunkumar joined #gluster
13:03 sunkumar joined #gluster
13:05 vbellur joined #gluster
13:06 vbellur1 joined #gluster
13:09 ivan_rossi joined #gluster
13:11 g_work hey there, could someone help me with this please: I deleted the .glusterfs dir in one folder and the mountpoint was then not accessible ... then I copied the data off the gluster brick (to be sure to prevent any loss) ... and then I did a gluster volume stop and start on myvolume and was able to mount the folder again ... but it's empty ... could someone help me get to my data again?
13:12 g_work (I was previously not able to remount the mountpoint and got an error 'endpoint not present' or something like that)
13:12 g_work now this is ok, but the data is still present in the gluster brick and not shown in the mountpoint ... :/
13:12 g_work (excuse me for my english in advance)
13:14 ivan_rossi left #gluster
13:16 g_work maybe cloph ? :X
13:18 cloph gluster fuse doesn't support remount, so that bit is unrelated - and as written earlier: copy the files from your backup location back to the volume.
13:18 dominicpg joined #gluster
13:18 cloph there is no way to have your data magically reappear.
13:19 g_work but my data is still present in the brick, which is located at /glusterfs/backup, and my mountpoint at /run/glusterfs/backup doesn't show anything
13:19 g_work is that normal ?
13:20 cloph yes, as you smashed its brain to pieces (removing the .glusterfs dir)
13:20 cloph and feel free to do it your way, and not as suggested, but then don't complain about inconsistencies or other problems down the road.
13:22 g_work I'm not complaining ... I'm just trying to understand and not dig myself in deeper ... :)
13:23 g_work so if I understand correctly ... I now copy the data I took from the brick this morning to the mountpoint, and I won't end up with duplicate data in the brick?
13:24 cloph you're not complaining *now* - but you wrote that you left the data on the brick / reused that directory to reinit the volume.
13:25 g_work I should have made a 'gluster volume delete'  before ?
13:25 g_work and manually delete files from the brick ?
13:25 skylar joined #gluster
13:25 g_work to not reuse it ?
13:26 cloph 12:24 <cloph> g_work: move the old data out of the way and create a new brick directory - don't risk having old extended attributes mess things up.
13:26 cloph that was my suggestion....
13:28 g_work I understand ... but my goal is also to keep the same mountpoint once all of this is fixed ... so, if I'm right, I now have to delete the brick, delete the files in the brick manually if that's not already done, then recreate the brick, possibly with another name, and just mount my new brick under the old mountpoint
13:28 g_work right ?
13:34 cloph the mountpoint is completely independent, it has no relation to where your brick directory is (or on what server the gluster volume runs)
13:36 cloph and no need to *delete* the brick directory, after all that is where your data still lives - but I would either rename that directory if you insist on keeping the same directory name for the brick's volume, or just create another directory and use that as the brick directory.
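A sketch of the recovery cloph describes, with hypothetical server and path names; the key points are a fresh (or xattr-cleaned) brick directory and copying the data back in through the mount point, never directly into the brick:

    gluster volume stop myvolume && gluster volume delete myvolume   # removes only the volume definition, not the files
    mv /glusterfs/backup /glusterfs/backup.old                       # old brick, with the data, moved out of the way
    mkdir /glusterfs/backup
    gluster volume create myvolume SERVER:/glusterfs/backup          # add 'force' if gluster objects to the brick location
    gluster volume start myvolume
    mount -t glusterfs SERVER:/myvolume /run/glusterfs/backup
    cp -a /glusterfs/backup.old/. /run/glusterfs/backup/             # copy back through the mount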
13:37 vbellur joined #gluster
13:38 vbellur joined #gluster
13:38 vbellur joined #gluster
13:39 vbellur joined #gluster
13:42 nbalacha joined #gluster
13:44 alvinstarr joined #gluster
13:45 vbellur joined #gluster
13:46 plarsen joined #gluster
13:57 ahino joined #gluster
14:17 buvanesh_kumar joined #gluster
14:19 hmamtora joined #gluster
14:26 vbellur joined #gluster
14:41 msvbhat joined #gluster
14:46 rafi joined #gluster
14:50 kpease joined #gluster
14:54 farhorizon joined #gluster
14:55 buvanesh_kumar joined #gluster
14:56 bwerthmann joined #gluster
15:09 wushudoin joined #gluster
15:18 baber joined #gluster
15:20 vbellur joined #gluster
15:40 gyadav_ joined #gluster
15:42 ahino joined #gluster
16:15 jiffin joined #gluster
16:19 baber joined #gluster
16:20 gyadav_ joined #gluster
16:22 nh2 joined #gluster
16:36 Vapez joined #gluster
16:36 Vapez joined #gluster
16:48 MrAbaddon joined #gluster
16:55 jkroon joined #gluster
16:57 MrAbaddon joined #gluster
17:09 baber joined #gluster
17:10 armyriad joined #gluster
17:19 msvbhat joined #gluster
17:23 farhorizon joined #gluster
17:24 rafi1 joined #gluster
17:25 nh2 joined #gluster
17:40 rafi joined #gluster
17:46 jbrooks joined #gluster
17:46 Wayke91 joined #gluster
18:00 rastar joined #gluster
18:07 _KaszpiR_ joined #gluster
18:18 baber joined #gluster
18:22 jiffin joined #gluster
18:28 kotreshhr joined #gluster
18:44 jiffin joined #gluster
18:46 nh2 joined #gluster
18:47 hvisage joined #gluster
18:48 vbellur joined #gluster
18:48 anthony25 joined #gluster
19:00 baber joined #gluster
19:04 vbellur joined #gluster
19:11 vbellur joined #gluster
19:26 ron-slc joined #gluster
19:29 kotreshhr joined #gluster
20:05 baber joined #gluster
20:08 cliluw joined #gluster
20:17 skylar joined #gluster
20:19 cliluw joined #gluster
20:22 Vapez joined #gluster
20:44 zcalusic joined #gluster
20:54 snehring joined #gluster
21:02 jobewan joined #gluster
21:02 zcalusic joined #gluster
21:12 msvbhat joined #gluster
21:15 Xetrov` joined #gluster
21:16 dijuremo2 joined #gluster
21:16 zcalusic joined #gluster
21:19 dijuremo2 I have a two node + arbiter setup and need to replace the backend HDDs. Have 12 drives nearing 5 years of use. We have 24 new drives and I was planning to remove the brick from one server, remove and replace all drives. Create the raid6 with the 24 drives and add the brick back to the volume. Does this sound like a good procedure?
21:20 dijuremo2 I have this volume mounted on both servers and use ctdb and samba as a file server. Removing the brick should not affect the mounted gluster file system, right? The other server should keep hosting the data, correct?
21:32 farhorizon joined #gluster
21:39 dijuremo2 Oh and I have a single brick per server. Currently each brick is made out of 12 2TB drives in raid 6.
21:41 dijuremo2 https://thepasteb.in/p/Mjhx477A7ZvsV
21:41 glusterbot Title: ThePasteBin - For all your pasting needs! (at thepasteb.in)
21:44 JoeJulian dijuremo2: I wouldn't "remove" the brick. I would just kill the glusterfsd process for that brick. Do my backend work, then "gluster volume start $volname force" to start the brick again. The volume will begin its self-heal and you're off and running.
21:48 dijuremo2 Brick 10.0.1.6:/bricks/hdds/brick           49153     0          Y       2161
21:48 dijuremo2 So simply: kill 2161  ?
21:48 dijuremo2 That is the brick I want to replace...
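Spelled out, JoeJulian's suggestion for the disk swap might look roughly like this; volume name and brick details are placeholders:

    gluster volume status VOLNAME          # note the PID of the brick on the node being serviced
    kill BRICK_PID                         # stops just that brick; the other replica and the arbiter keep serving
    # ... swap the drives, rebuild the RAID 6, mkfs and mount the new array at the same brick path ...
    gluster volume start VOLNAME force     # restarts the missing brick process
    gluster volume heal VOLNAME info       # watch self-heal bring the new brick up to date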
21:49 dijuremo2 Since I will be going from 12 2TB drives to 24 4TB drives, and this is a file server with samba, are there any recommendations on file system to use?
21:49 dijuremo2 I believe I had read that some people recommended ext4 over xfs for this type of load, is that really true?
21:50 JoeJulian Yes, that's the kill statement. There's no benefit to ext4 over xfs, not in about 12 years.
21:51 JoeJulian There's still a lot of FUD going around about the historical differences, but it's been disproven time and time again.
21:51 JoeJulian I prefer the MTTR of xfs, personally.
21:52 dijuremo2 Is it worth setting xfs options during formatting? When I originally created this volume I used options to set the stripe width to 10, given I had 12 drives in raid 6, so should I also do all that or not?
21:53 JoeJulian It will detect your raid configuration automatically.
21:56 dijuremo2 I had been following from the RHEL guides for Gluster storage:
21:57 dijuremo2 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Configuring_Red_Hat_Storage_for_Enhancing_Performance.html#Hardware_RAID
21:57 glusterbot Title: Chapter 13. Configuring Red Hat Gluster Storage for Enhancing Performance (at access.redhat.com)
21:58 dijuremo2 mkfs.xfs other_options -d su=stripe_unit_size,sw=stripe_width_in_number_of_disks device
21:59 dijuremo2 Are you certain I can skip that and simply reformat the new raid 6 array?
22:00 dijuremo2 mkfs.xfs /dev/proper-brick-backend ?
22:01 Wayke91 left #gluster
22:11 kotreshhr left #gluster
22:14 msvbhat joined #gluster
22:15 vbellur joined #gluster
22:18 vbellur joined #gluster
22:19 vbellur joined #gluster
22:19 JoeJulian dijuremo2: Sorry, meeting. That's what the xfs documentation says and the code appears to agree.
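If you do want to pass the RAID geometry explicitly, as the Red Hat guide linked above does, a 24-drive RAID 6 has 22 data spindles, so with, say, a 256 KiB stripe unit on the controller the call would be along these lines; -i size=512 is the usual recommendation for gluster bricks so xattrs fit in the inode:

    mkfs.xfs -i size=512 -d su=256k,sw=22 /dev/your-raid6-device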
22:19 vbellur joined #gluster
22:20 vbellur joined #gluster
22:21 vbellur joined #gluster
22:21 vbellur joined #gluster
22:22 vbellur1 joined #gluster
22:24 farhoriz_ joined #gluster
22:33 nh2 joined #gluster
22:47 decayofmind joined #gluster
22:48 Wizek_ joined #gluster
23:04 bwerthmann joined #gluster
23:20 dijuremo2 JoeJulian: All seems OK, data is being replicated now... Thanks a lot!
23:36 vbellur joined #gluster
23:37 vbellur1 joined #gluster
23:39 dtrainor joined #gluster
23:40 dijuremo2 JoeJulian: The replication speed does not seem to be great. Using iotop, I see writes on the upgraded disk in the 17MB/s to 35MB/s range, though it hovers closer to 20MB/s. Is there a way to speed up replication?
23:42 dtrainor Hi.  I blew up a distributed-replicate volume again, on the same host.  I used http://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick, "Replacing brick in Replicate/Distributed Replicate volumes".  The instructions worked until "gluster volume replace-brick ..." but auto healing did not start as I expected it to.  Trying to heal tells me "Launching heal operation to perform index self heal on volume slow_gv00 has been unsuccessful on bricks that are down. Please check if all brick processes are running".  That brick is, in fact, not running per 'gluster volume status slow_gv00', and I'm not sure how to proceed.
23:42 glusterbot Title: Managing Volumes - Gluster Docs (at docs.gluster.org)
23:42 JoeJulian dijuremo2: Not sure, but maybe "gluster volume heal $vol full"? My guess is that it's healing files as they're referenced.
23:43 dijuremo2 Will that mess up with the ongoing heal?
23:43 JoeJulian dijuremo2: no
23:44 dijuremo2 ]# gluster volume heal export full
23:44 dijuremo2 Launching heal operation to perform full self heal on volume export has been successful
23:44 dijuremo2 Use heal info commands to check status
23:44 JoeJulian dijuremo2: :+1:
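To keep an eye on the full heal that was just launched (the throughput question itself is not resolved in the log), something like the following; 'export' is the volume name shown above:

    gluster volume heal export info                    # files still pending heal
    gluster volume heal export statistics heal-count   # per-brick count of pending entries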
23:44 JoeJulian ~pastestatus | dtrainor
23:44 glusterbot dtrainor: Please paste the output of gluster peer status from more than one server to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
23:45 vbellur joined #gluster
23:46 dtrainor I just have the one server with 4 drives in it, in the distributed-replicate volume, local bricks, no other peers.  'gluster peer status' just tells me 0 peers
23:46 JoeJulian Meh, that's also not the trigger I thought I was hitting. Not multitasking well this afternoon.
23:47 JoeJulian I was hoping for "gluster volume status"
23:47 dtrainor haha
23:47 dtrainor https://paste.fedoraproject.org/paste/ppR~eU612rp1ZALhe6DmTQ
23:47 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
23:48 vbellur1 joined #gluster
23:48 JoeJulian is /gluster/bricks/slow/slow_v00_b01-new/data the new brick?
23:48 dtrainor it is, yes.
23:48 JoeJulian Is the old one still in there?
23:48 JoeJulian in the volume, I mean.
23:49 dtrainor nope, it went away when I used replace-brick
23:49 decayofmind joined #gluster
23:49 JoeJulian Good. Are you sure /gluster/bricks/slow/slow_v00_b01-new/data has your storage mounted?
23:49 dtrainor i don't see any rogue glusterfs processes running for the old brick either
23:50 dtrainor yep, mounted:  /dev/sdf on /gluster/bricks/slow/slow_v00_b01-new type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
23:50 JoeJulian getfattr -m . -d -e hex /gluster/bricks/slow/slow_v00_b01-new/data
23:50 dtrainor i can read/write to/from it, it works as i'd expect a brick to work at this point
23:51 JoeJulian Once you get those xattrs, do the same for another brick path. Ensure the volume-id attribute matches.
23:51 dtrainor https://paste.fedoraproject.org/paste/-~B1xUK5t8FPy1-udmq~MA
23:51 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
23:51 dtrainor ah, that sounds familiar
23:52 dtrainor trusted.glusterfs.volume-id matches between this new brick and a known working brick (which was neither the old brick nor the brick replacing it)
23:52 JoeJulian We're checking to make sure the brick is actually valid for that volume.
23:52 dtrainor understood
23:53 JoeJulian Ok then, gluster volume start slow_gv00 force
23:53 JoeJulian If it works, great. If it dies again we'll probably want to check the brick log.
23:54 dtrainor ok. it's thinkin'.
23:55 vbellur joined #gluster
23:57 dtrainor hmm... failed with "Error: Request timed out".  I tried to stop the volume, with the intention of starting it back up with force, and got:  "volume stop: slow_gv00: failed: Another transaction is in progress for slow_gv00. Please try again after sometime."
23:57 JoeJulian dtrainor: restart glusterd
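The remaining steps for dtrainor, following JoeJulian's advice, would be roughly as below; the volume name is taken from the log, and the final heal is only needed if healing does not start on its own:

    systemctl restart glusterd
    gluster volume start slow_gv00 force    # bring up the brick process for the replaced brick
    gluster volume status slow_gv00         # the new brick should now show Online: Y
    gluster volume heal slow_gv00 full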
