RH-ISAC Logbook, Page 4 of 135
Entry  Monday, September 23, 2024, 11:31, David Wang, South Hot-Cell, Development, TM3, , 10-pin electrical check result after insulator is installed on water cap C.  

10-pin electrical check results after the insulator was installed on water cap C.

Pin 14 to 60 kV bias reads 20 kOhm on the 250 V megger check. The rest, checked with the ohmmeter and megger @ 250 V, read either OL or over 25 MOhm, which is good.
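As a minimal illustration of the pass criterion above (readings of OL or above 25 MOhm are good), the check can be sketched as follows. The 25 MOhm threshold is from this entry; the pin names and readings in the example are hypothetical.

```python
# Illustrative sketch of the pass criterion above. The 25 MOhm threshold is
# from this entry; pin names and readings below are hypothetical.

GOOD_THRESHOLD_OHMS = 25e6   # readings above 25 MOhm (or "OL") count as good

def classify(reading):
    """Classify a 250 V megger reading: 'OL' (over range) or a value in ohms."""
    if reading == "OL" or reading > GOOD_THRESHOLD_OHMS:
        return "good"
    return "check"   # e.g. a 20 kOhm reading warrants a closer look

readings = {"pin 14 to 60 kV bias": 20e3, "pin 1": "OL", "pin 2": 40e6}
for pin, value in readings.items():
    print(f"{pin}: {classify(value)}")
```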

Entry  Monday, September 23, 2024, 11:26, David Wang, South Hot-Cell, Standard Operation, TM3, , TM3 window line has been leak checked.  

TM3 window line has been leak checked after containment box removal and reinstallation for the insulator installation on water cup C. No leak.

Entry  Wednesday, September 18, 2024, 13:47, David Wang, South Hot-Cell, Standard Operation, TM3, TiC#8, TM3 has been moved from silo to SHC. 

TM3 has been moved from the silo to the SHC. The move was smooth.

Entry  Wednesday, September 18, 2024, 13:40, Aaron Tam, South Hot-Cell, Standard Operation, TM3, TiC#8, Target removal

 TiC#8 HP FEBIAD removed from TM3

Target sitting in SHC

One coil conductor bolt fell under the source tray.

Photo Dump uploaded

Entry  Tuesday, September 17, 2024, 13:42, Frank Song, ITW, Standard Operation, TM4, TiC#9 , Module Connection TM4_with_TiC#9_connection_in_ITW_2024-09-17.pdf

 TM4 with target TiC#9 is connected in ITW. See attachment.

Entry  Tuesday, September 17, 2024, 11:16, David Wang, ITW, Standard Operation, TM4, TiC#9, TM4 has been moved from TCS to ITW. 

TM4 has been moved from TCS to ITW. The move was smooth.

Entry  Monday, September 16, 2024, 08:48, Chad Fisher, Manipulator Maintenance, Maintenance, , , CRL Technician Manipulator Maintenance Trip Triumf_Document_2024-09-20_102936.pdf

Over the course of September 9-13, 2024, a CRL manipulator technician carried out the following manipulator maintenance:

NHC operator side arm #9763 (right side installed position) - Inspection and re-tensioning of tong cable prior to re-installation (was a little too tight). Operational.

E-HD clean spare #8015 - repaired stuck manual 'Z' motion. Removed one rogue 5-40 screw that had been dropped inside the remote end during the 'Z' motion cable run-through in 2017; found that the related 5-40 screws from the cable end clamp were not tight and therefore proud and also jamming. Complete tape replacement and tensioning. This manipulator is fully back to operational status other than needing some screws for the air restriction on the remote end.

E-HD "dirty" spare #7985 - inspection and evaluation of the operator-side parts required to return it to operational status. Wrist and handle reassembled (all parts of wrist and handle accounted for). Ball-and-socket joint at end of motion lock replaced (parts were already on hand), as well as the tape motion lock assembly (parts already on hand). This arm may be missing a relay or two. The remote side is still bagged with an unrecorded issue (suspected to be only a broken tong cable); this will be put in the maintenance schedule for early 2025.

NHC operator side "spare" arm #9351 - Damaged 'Z' motion tape replaced and re-tensioned. Operational once the 'Y' motion linear actuator is re-installed.

Meson hall warm cell manipulators evaluated/inspected - No issue found

Entry  Monday, September 16, 2024, 08:23, Chad Fisher, North Hot-Cell, Repair, , , Model N Remote end #9352 repair 

On Friday, September 6, 2024, Model 'N' remote arm #9352 was removed from the NHC right side installed position to investigate an ongoing issue with the tong motion circuit (no motion on the remote side when the handle on the operator side is operated).

Inspection revealed that the tong cable on this arm had hopped its cable drum and wrapped around the shaft of the cable drum. As well, the tong cable had hopped a lower pulley and was jammed between the pulley and the casting of the pulley housing. These issues were corrected and arm #9352 was reinstalled in the right-hand position. This repair resolved an ongoing tong motion issue stemming from November 2023, described below.

The initial tong motion problem dates from November 2023, when the tong system jammed (operator side arm #9763, remote arm #9342).

Remote arm #9342 was removed and replaced with arm #9352, as a hopped cable on #9342 was suspected. Upon removal it was confirmed that the tong cable on remote arm #9342 had indeed hopped the pulley. After installation of the "spare" remote arm #9352, which had not been installed since its previous servicing and should have been completely operational, it was found that the tong system still did not work properly.

Upon thorough inspection of operator arm #9763 it was found that the tong cable was prematurely reaching the end of its travel on its drum, making it impossible to activate the circuit. The tong cable was reset on its drum as per the manufacturer's specs, with one full cable wrap on the drum and one full pretension wrap of the tong drum spring.

Although this resolved part of the problem and allowed a full squeeze of the operator handle it did not result in transfer of motion through the system to the remote end.

Further investigation continued, separating the operator side arm from the seal tube and testing the seal tube and operator side arm independently. Both systems worked fine independently, pointing to a potential problem between the seal tube and spring loaded coupler on the remote arm.

Remote arm #9352 was removed and its couplers visually inspected. The end of the seal tube on the hot side was also inspected, and it was found that the pin in the end of the shaft on the tong circuit was not installed symmetrically, which could cause uneven pressure on the spring-loaded coupler of the remote arm. This pin was adjusted, and all pins on the hot side of the seal tube were dimpled and Loctite applied as a precautionary measure.

This still did not remedy the tong motion problem, which led to the September 6, 2024 removal and repair of remote arm #9352. This arm had previously been serviced by a CRL technician in December 2017 and had not been installed or used since, and so should have been in operational status. It is still not clear why it was not.

 

 

Entry  Thursday, September 12, 2024, 14:12, Adam Newsome, Waste Package/Ship, Development, , SiC#44, Failed waste emplacement at CNL: Flask #22, Pail 275 (SiC#44) Report_on_Inspection_of_Pail_275_on_2024-10-03.pdf

On 2024-09-12, CNL reported difficulty with lowering Pail 275 from F-308 flask #22 into their tile hole. The four other pails associated with this shipment were successfully emplaced. It is suspected that the issue was related to a kink in the lowering cable, causing a jam during lowering. A similar jam was observed approximately one year ago.

Discussions with CNL to learn more about the root cause are ongoing, and this e-log will be updated when more information is learned. It is expected that this flask will be shipped back to TRIUMF and the pail will need to be repacked.

 

Update 2024-09-12: email from A. Swan at CNL: "4 of the 5 flask were emplaced successfully with F308 #22 SiC#44 having an unsuccessful pull test and left within the flask."
This is the extent of the information they have provided and is not sufficiently helpful to determine what the issue is.

 

Updated 2024-10-03: an investigation was performed (work permit C2024-09-26-10). It was determined that the root cause of the issue is attributed to an assembly error - the cable was incorrectly routed through the holder part. See attached report for full details.

 

Entry  Wednesday, September 11, 2024, 13:38, Adam Newsome, North Hot-Cell, Repair, , , Spare Manipulator - Y-axis motion not functioning 

Update 2024-09-12: C. Fisher confirms this is actually normal behaviour. The Y-axis motion is inhibited when X and Z are in a certain configuration, which was true when the manipulator was mounted on the wall in storage position. Non-issue. A note about this has been made in the operator manual.

 

It was observed today that for the spare manipulator (Model N, Serial 9351), the Y-axis motion was not functioning. X motion and Z motion are functioning as expected. Note that the Y-axis linear actuator was recently replaced and successfully tested and has not been used since the replacement (see e-log 2422).

The following information was determined from troubleshooting:

  • X motion functioning correctly
  • Z motion functioning correctly
  • Y motion selection functioning correctly - the indicator light on the operator control box lights up to confirm Y is selected, and the relay on the main control board which selects the Y axis toggles, as expected
  • Y motion motor contactors do not function as expected when toggling in either direction. Note that these contactors are shared in common with the other two motion axes, so they are functioning correctly and presumably the motor controller itself is as well, since the other axes work.
  • Most likely, because of the above reasons, there is an issue with the inhibit signal that is specific to the Y axis. The inhibit signal which runs from the main control board to the motor control board is shared in common with all motion axes (same for the two direction select signals). It is suspected (not yet confirmed) that the microcontroller is not outputting the inhibit signal as it should be. This is not due to a limit-switch-related issue because the Y axis does not have limits. Troubleshooting this has been difficult because the schematics do not show the full extent of the circuitry.
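The elimination reasoning above can be sketched as a small boolean model. The signal names are paraphrased from this entry; the model is purely illustrative and is not the actual manipulator schematic.

```python
# Hypothetical sketch of the elimination reasoning above. Signal names are
# paraphrased from this entry; the model is illustrative, not the actual
# manipulator schematic.

# Components shared by all three axes: proven good when any other axis moves.
SHARED = {"inhibit_line", "direction_selects", "contactors", "motor_controller"}

def y_axis_suspects(x_moves, z_moves, y_select_relay_ok):
    """Return the components still suspect for the non-functioning Y axis."""
    y_specific = {"y_select_relay", "uC_y_inhibit_output"}
    suspects = SHARED | y_specific
    if x_moves or z_moves:        # shared signal path proven good by other axes
        suspects -= SHARED
    if y_select_relay_ok:         # relay observed toggling on the control board
        suspects.discard("y_select_relay")
    return suspects

# Observations from this entry: X and Z move, and the Y select relay toggles.
print(y_axis_suspects(True, True, True))   # {'uC_y_inhibit_output'}
```

Only the Y-specific microcontroller inhibit output survives elimination, which matches the suspicion recorded above.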

Further troubleshooting steps:

  1. Check inhibit signal functionality - is it working for Y axis?
  2. Check direction control signal functionality - are they working for Y axis?
  3. Swap motor select wires to "trick" the microcontroller into thinking it is running a different motor, to further isolate the issue.
  4. If required, swap the entire main control board with another manipulator's, to see if the issue is related to the circuitry which is not described in the schematics.

At this time, this issue is not deemed critical because this is a spare unit.

 

Entry  Wednesday, September 11, 2024, 07:59, David Wang, Conditioning Station, Standard Operation, TM4, TiC#9, TM4 has been moved from SHC to TCS. 

TM4 has been moved from SHC to TCS. The move was smooth.

Entry  Tuesday, September 10, 2024, 12:41, Frank Song, South Hot-Cell, Standard Operation, TM4, TiC#9 , electrical check/leak check TM4_with_TiC#9_Surface_electric(leak)_cheak_on_SHC_2024-09-10.pdf

 TM4 with the new target TiC#9 Surface installed in SHC was leak/electrical checked before moving to TCS. Please see the attached pictures.

Entry  Tuesday, September 10, 2024, 12:41, Aaron Tam, South Hot-Cell, Standard Operation, TM4, TiC#9, Target TiC#9 HP-SIS installed onto TM4 089c5872-091d-484b-9b9c-dda8444b217b.jfif 566cae27-5562-4934-b23a-5c99af810900.jfif cc853ae8-5896-46f3-9f21-f6ede8c82626.jfif ffdf443c-2656-499a-9d0a-01ddef2f954a.jfif

September 9, 2024 - Aaron Tam 

  • Water from waterline found puddled on source-tray and containment box. 
  • Water was wiped up as much as possible, but likely still in containment box 
  • leaving to dry overnight
  • Target inserted into Hotcell with nuts for the parasitic target. 4/4 socket nuts installed on upstream flange, and 3/4 installed on downstream flange

September 10, 2024 - Aaron Tam

  • TiC#9 HP-SIS installed onto TM4 
  • Leak check passed after retightening VCR joint
  • Target Electrode Conductor screws torqued to 130 inch-lbs 
  • Electrical test passed 
  • Containment box cover reinstalled

Photos attached

Entry  Friday, September 06, 2024, 14:28, Aaron Tam, South Hot-Cell, Standard Operation, , Ta#68(Spare), Ta#68(Spare) removed from TM4 and into Anteroom  

September 5, 2024 -AT

  • Plastic and clean target tray added to SHC 
  • TM4 moved to SHC 
  • TM4 containment box cover removed

September 6, 2024 - AT

  • Ta#68(Spare) removed from TM4 onto clean tray 
    • top left mounting bolt fell under source tray in containment box - unretrievable
  • Ta#68(Spare) moved out of SHC via toolport door 
    • SHC lift table was lower than tool port tunnel by 2". Target-on-Tray slipped back onto lift table when trying to drag it up to the tool port. Target bounced out of one locating pin, but stayed within boundaries of the tray 
    • Water also leaked out from the VCR fitting onto the SHC lift table and tool port. 
  • Target-on-tray slid into a clean plastic bag, and then into a plastic box. 

 September 9, 2024 - AT

  • Ta#68(Spare) taken out of plastic bag and water drained from waterlines
    • Target was rotated in every direction several times until no more water poured out from the VCR joints
  • Ta#68(Spare) put back in box without the bag (to promote evaporation) 

 

Entry  Friday, September 06, 2024, 10:01, David Wang, South Hot-Cell, Standard Operation, TM4, Ta#67 back up. , TM4 has been moved from SMP to SHC. 

TM4 has been moved from SMP to SHC. The move was smooth.

Entry  Thursday, September 05, 2024, 12:50, Adam Newsome, Crane, Maintenance, , , Overhead crane: up/down hoist delayed start issue 

D. Wang reports that recently the overhead crane has been exhibiting a delayed start (approx. 1 min) when operating in local mode. This issue applies to the main hoist's up/down functionality only and does not seem to apply to other motion axes. T. Kauss briefly investigated but did not find any obvious cause.

This issue will be monitored and investigated over the following weeks and this log will be updated as more information becomes available.

Suggested troubleshooting steps:

  • Isolate the issue to local or remote mode to confirm this suspicion (it appears to be present in both local and remote mode)
  • With help from someone locally operating the crane, confirm at the receiver in the control room whether signals are coming through for up/down commands immediately, or whether the receipt of signals itself is delayed. At the same time, confirm whether the PLC input card is receiving the command signals from the receiver immediately or in a delayed manner. (it appears that the command signals are being received by the VFDs - the issue seems to be on the output side)
  • Time the delay, and repeat to confirm if the timing is consistent every time as reported by D. Wang (it appears timing is inconsistent. After a weekend of no use, it was approximately 2 minutes. After a few hours of no use, it was around 10 seconds).
  • Check if delay is present across various crane positions in the target hall (completed by D. Wang - result: yes)
  • Inspect controls hardware in the cabinet in the control room, as well as the remote IO on top of the crane, for obvious issues (checked cabinet but not remote IO - nothing obvious)
  • Go online with the PLC and test up/down commands to see if the program indicates any obvious issues (note: this may not actually help - seems to be an electrical issue isolated to the VFD-related electronics)
  • Mechanical inspection of the motor (not likely an issue)
  • Disconnect motor, repeat test of trying to command up/down motion and see if the motor itself had any effect on the delay

Update, 2024-09-09 [DW]: Confirmed that both local and remote modes exhibit the same delayed main hoist behaviour after the crane is switched on. The delay time is 1 minute 40 seconds to 2 minutes. It happens mostly the first time the crane is switched on, but if the crane is not used for the rest of the day after the first switch-on, the problem appears again 5 to 6 hours later the same day.

Update, 2024-09-09 [AN]: Checked again around 11:30am... tested running both hoists A and B in remote mode. Upon first attempt to lower hoist, no motion occurred. Hoist A VFD exhibited fault code 51, and Hoist B exhibited fault code 52. Both hoists appeared to receive the command from the PLC to attempt to move. Both hoists (initially) had their "ready" status as ON. When attempting to move, however, hoist B's ready status dropped out. Note also that the delay observed between the failed attempt to start, and when motion was actually possible, was only approximately 10-20 seconds. Perhaps this correlates to the fact the crane was recently operated this morning. It is suspected that the charging circuit for hoist B's DC bus voltage is faulty.
Tomorrow, another test will be performed by checking A and B independently to see if one can run but the other cannot.

 

Update, 2024-09-10 [AN]: Tested the crane using only Hoist A: working upon first power-up of the day. Tested using only Hoist B: not working, fault 52 re-appears. We are certain the issue lies with Hoist B. Furthermore, upon observing the motor contactors and status LEDs when attempting to energize the motor, the following is observed:

  • For Hoist A (normal, working operation) - on power up K1 toggles on (in). When pressing down button, K7 toggles (in). The hoist begins to move.
  • For Hoist B (non-functioning), on power up K1 toggles on (in). When pressing down button, K7 does NOT toggle. K1 toggles OFF (it should not) then comes back, then triggers the fault.

Upon further inspection it was noted that for K1 for Hoist B, there appears to be a snubber (XEB2202) wired in across the A1 and A2 terminals of the contactor. The fact that a time-dependent circuit is involved matches earlier theories about a charge-timing related issue. Suggested action: attempt to remove the snubber and test again to determine if the issue persists.

 

Update, 2024-09-11 [AN]: Under work permit I2024-09-10--3, the following was tested and observed:

  • Run Hoist B to confirm it is not working (expected behaviour) - confirmed
  • Power off crane, remove the snubber from Hoist B's contactor K1
  • Power on crane, attempt to run Hoist B - the hoist did not run
  • It was noticed also that Hoist A did in fact have a snubber installed as well - it was hidden. The capacitance of Hoist B's snubber was measured and confirmed to match what it should be, so it is suspected that the snubber is working fine.
  • This indicates that the snubber is not the issue. The snubber was reinstalled and the system returned back to normal state. Tested - working.

At this time the root cause remains unknown, but the snubber has been eliminated from the possibilities.

It appears that the issue can be isolated to the fact that contactor K1 momentarily toggles off when attempting to operate the hoist. This short blip would explain the fault code related to insufficient line voltage. The drawings indicate the only way that K1 can turn off is if the 48 VAC supply from the transformer drops out momentarily, or if an unnamed relay located (presumably) in the VFD momentarily toggles.

Further troubleshooting steps could include:

  • Probe for 48VAC at A2 terminal of K1 and attempt to operate the hoist - see if it drops off briefly. If so, the issue is either the transformer or the relay contact in the VFD. Perform continuity check across the relay contact, repeat attempt to operate hoist, and determine if it is the relay contact causing the issue. This test will significantly isolate the issue.
  • Check inside Hoist B's VFD circuitry and measure the DC bus voltage during attempted operation, and compare to Hoist A. There may be an issue with the charging circuit inside the inverter.

 

Update, 2024-09-13 [AN]: Under work permit I2024-09-10--3, the following was tested and observed:

  • Probe the A2 terminal of K1 for Hoist B with respect to the transformer's 0V output upon initial power-up of the system: ~53VAC measured (should be 48VAC but 53 is acceptable)... this confirms that the appropriate relay coil voltage is present upon power-on, as expected because it is observed that the relay toggles upon power-up.
  • Continue probing A2 while attempting to jog Hoist B down, with min-hold set on the multimeter to check for voltage drops: the voltage measured was approximately 24VAC during one attempt and 36VAC during another. This implies that K1's coil voltage does in fact drop out instantaneously, resulting in K1 very briefly disengaging, which causes the observed VFD fault. The fact that the measured min voltage differs between attempts can be attributed to the multimeter's sampling rate catching the voltage during its decline towards 0.
  • Because of the aforementioned test results, it is confirmed that there is an issue associated with K1's coil voltage briefly dropping out when attempting to run the hoist. There are only two reasons this could happen: 1) the transformer power output of 48VAC actually drops, or 2) the relay contact in series with this (located inside the VFD) opens up as a result of a VFD fault. The latter is more likely.
  • Upon investigating the VFD further, it was determined that another fault code was present prior to the above-mentioned code 52. This fault code happened very briefly at the same time as K1 toggling, but was then covered up by code 52. This fault code is 2, which states that there is an overvoltage condition - the DC bus voltage has exceeded 911 VDC (135% of the device's maximum nominal voltage of 500V). This can be attributed to a supply voltage surge in which the voltage rises 35% above its nominal value.
  • Note: line-to-line voltages were measured at the input to K1 after the "warm up period" and the issue was resolved, when the hoist was sitting idle. These were measured to be almost exactly 480VAC. This represents a reference condition.
  • What seems to be happening is that when the hoist motion attempts to start, there is a line voltage surge for some reason (back-emf?) which causes this fault condition for a temporary instant, but then when the voltage dissipates the fault instantly clears. This explains why contactor K1 very briefly flickers during motion attempt - the fault is only briefly present. But then, fault code 52 takes over and remains present (because of the line voltage disruption).
  • Still, the root cause of this issue is unknown. It is not confirmed whether there is actually a voltage surge (to be measured next week), nor why it seems to happen only for the first ~2 minutes of the day.
    It could be attributed to one of the following reasons:
    • Coil voltage rectifying diode partial failure inside K1... the diode may need time to "warm up"
    • Brake solenoid partial failure for Hoist B (causing additional friction which leads to overvoltage condition for the motor)... the brake may need time to "warm up"
    • Charging capacitor issue in DC bus voltage charge circuit inside the VFD

Suggested troubleshooting steps:

  • Probe line-to-line voltage at input terminals to K1 during attempt to operate Hoist B in max-hold mode: check for surge, record values
  • Probe DC bus voltage during the same condition, record value
  • Determine if the above indicate a true overvoltage condition, and determine why this may be

 

Update, 2024-09-16 [AN]: Under work permit I2024-09-10--3, the following was tested and observed:

  • Probe L1-L2, L2-L3, and L1-L3 line voltages on input side of contactor K1 for Hoist B with max hold set on multimeter to confirm whether there is a surge when attempting to move the hoist - no surge was observed. 480VAC constant was measured in each case.
  • There may be another issue causing the DC bus overvoltage condition (an issue with the motor or an issue with the drive itself)

Suggested troubleshooting steps:

  • Probe DC bus voltage during the faulted condition, record value
  • Disconnect the motor from the drive and check motor winding resistances before and after the "warm up" period to see if there is a change, and also compared to Hoist A
  • With the motor disconnected, attempt to run the drive - determine if fault code 2 shows up or if the drive appears to be working.. this may eliminate the motor from the list of potential issues

 

 Update, 2024-09-16 [AN]: Under work permit I2024-09-16--3, the following was tested and observed:

  • Probed DC bus voltage on Hoist B's VFD prior to attempting to move hoist, and during the attempt to move it. In both cases it remained a constant 690 VDC. No temporary spike was observed. This is also lower than the threshold that the VFD's manual stated the fault would typically occur at (~911 V) so it casts doubt on whether this is the root cause of the problem.
  • Probed DC bus voltage for Hoist A's VFD, for comparison - same measurement.
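As a rough sanity check of the measurements above, the expected DC bus voltage can be estimated from the line input (assumption: the VFD front end is a standard three-phase diode rectifier, so the ideal lightly loaded DC bus sits near the line-to-line peak, sqrt(2) x V_ll):

```python
import math

# Rough sanity check of the measured values above. Assumes the VFD front end
# is a standard three-phase diode rectifier, so the ideal (lightly loaded)
# DC bus voltage sits near the line-to-line peak, sqrt(2) * V_ll.
V_LINE_TO_LINE = 480.0   # measured line input, VAC RMS
V_BUS_MEASURED = 690.0   # measured DC bus, VDC
FAULT_THRESHOLD = 911.0  # overvoltage trip level per the VFD manual, VDC

v_bus_ideal = math.sqrt(2) * V_LINE_TO_LINE
print(f"ideal DC bus from 480 VAC: {v_bus_ideal:.0f} VDC")             # ~679 VDC
print(f"margin below trip: {FAULT_THRESHOLD - V_BUS_MEASURED:.0f} V")  # 221 V
```

The measured 690 VDC is close to the ideal rectified value and sits well below the 911 VDC trip level, consistent with the doubt noted above about DC bus overvoltage being the root cause.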

Suggested troubleshooting steps:

  • Disconnect the motor from the drive and check motor winding resistances before and after the "warm up" period to see if there is a change, and also compared to Hoist A
  • With the motor disconnected, attempt to run the drive - determine if fault code 2 shows up or if the drive appears to be working.. this may eliminate the motor from the list of potential issues
  • Swap VFDs between Hoist A and B to determine if the problem tracks the drive
  • Swap K1 between Hoist A and B

 

Update, 2024-09-16 [AN]: Under work permit I2024-09-16--3, the following was tested and observed:

  • Measured motor winding resistance between every combination of lines for both Hoist A and B (for comparison). Note: this was done mid-day, prior to any use of the crane for the day. In each case, the resistance was measured to be 2.2 Ohms. There is no difference between A and B. This is not likely the root cause of the issue.

Suggested troubleshooting steps:

  • With the motor disconnected, attempt to run the drive - determine if fault code 2 shows up or if the drive appears to be working.. this may eliminate the motor from the list of potential issues
  • Swap VFDs between Hoist A and B to determine if the problem tracks the drive
  • Swap K1 between Hoist A and B

 

Update after email discussion with Kone service tech, 2024-09-23 [AN]:

The Kone service tech said "This is a obsolete inverter and there is not a direct replacement available or parts for repair .  It is recommended to replace inverter with a conversion panel. The conversion panel consists of new, correctly sized components including D2V inverter, to have the same functionality as original panel. All components mounted and prewired to a back panel that fits directly inside the existing enclosure. All inputs and outputs are terminated at a terminal strip. Interconnecting wiring diagrams are also provided for ease of installation. The lead time for a conversion panel is approximately 10-12 weeks after receipt of a Purchase order."

A quote will be obtained from Kone for the replacement.

Update after K7 swapping between hoist A and B with Jason, Mike, Julie, 2024-10-09 [DW]:

Contactor K7 was swapped between hoists A and B. On hoist B we saw the F52 fault and K7 did not engage properly. On hoist A we saw the F51 fault, which is "stop limit has been tripped", and K7 also did not engage properly. After a 2-minute wait, both hoists A and B were back to normal. The plan for tomorrow: switch to hoist A and test; right after, switch to B and test.
Update for the past three days' tests and progress. David Wang, 2024-10-12

2024-10-09 noon. Left crane with power on for 1 hour. Switched off power on the crane for the rest of the day, until the Thursday morning test.
2024-10-10 morning. Set to A hoist only. Switched on main power and tested hoist A down: normal, no delay. Right away switched to hoist B and tested hoist B down: normal, no delay. Switched to A and B and tested hoist up and down: normal, no delay.

2024-10-10 noon. Replaced spark quenchers on hoist B K1 and K7. Tested crane after replacement; everything works fine. Used crane to lift F308 around 1:30. Then switched off crane for the next morning test.

2024-10-11 morning. Set to A and B. Tested crane hoist down twice, 2-3 seconds each time with 3 seconds between. No fault; hoist worked fine. 3 seconds after, tested hoist up and found the K7s were not on, on both A and B. 10 seconds after, tested hoist up again and it worked. Then tested all crane movements; everything was normal. Crane was used for spent target moves until 2:30pm, then switched off.

2024-10-12 morning: Set to A and B. Right after power on, tested hoist up for 5 seconds, waited 3 seconds, then tested hoist down for 5 seconds. Repeated the same up and down test within 3 seconds. No fault; hoists A and B work fine. Tested N-S and E-W movement, all good. Lastly, tested hoist up and down 10 seconds each; hoists A and B are still good. The plan for the next morning test: leave crane power off until Tuesday morning and test hoists A+B (48 hours power off). Also plan to replace spark quenchers on hoist A K1 and K7 if any delay is found in the Tuesday morning test. 
2024-10-15 morning: Set to A and B. Tested hoist up and down, one click each. I saw the K7 momentarily-on-then-drop-off symptom as before, with F51 on A and F52 on B. Switched to A right away and tested; A works fine. Then switched to B and tested; B works fine. The total test time from A+B to B then to A was about 30 seconds. Then I switched back to A+B for checking; everything works fine as anticipated.

2024-10-16. The A-K7 snubber was replaced yesterday after the morning test. Tested hoists A+B this morning after 24 hours crane power off. Hoists A+B work fine on both up and down. 

2024-10-17 Tested A+B hoist this morning after 22 hours crane power off. Hoist A+B works fine on both up and down.

2024-10-18 Tested A+B hoist this morning after 24 hours crane power off. Hoist A+B works fine on both up and down.

2024-10-21 Tested A+B hoist this morning after 72 hours crane power off. The delay issue on hoist B appeared; hoist A is fine. B/K7 was momentarily on then dropped off. At the same time, B/K1 momentarily dropped off then came back on, with the F52 code.

2024-10-22 Tested A+B hoist from target hall this morning after 22 hours crane power off.  hoist A+B  works fine on both up and down.

2024-10-22 noon.  Swapped  hoist A/K1 and B/K1. Crane power off at 9:30am after flask/pail repacking job.

2024-10-24. Tested A+B hoist this morning after 46 hours crane power off. Hoist A+B works fine on both up and down. Crane will be left as power off for 48 hours for next test.

2024-11-21. In the past month, I tested the crane multiple times. 24-hour power-off test results are always good; 48-hour or longer power-off test results are not consistent. Hoist B had a 15 to 20 second delay in early checks after the last e-log, but in the recent 6-day power-off check and 3-day power-off check, hoist B had no delay. Next steps: 1) keep up the multi-day power-off checks; 2) replace hoist B K1 (line + auxiliary) contactors with new parts.

2024-11-25. Tested hoists A+B this morning after 4 days crane power off. No delay; everything works fine. Replacement contactors for K1 have been requested. The line contactor is on 5-week back order from Digikey, so we will most likely replace hoist B's K1 in January next year.

2024-12-02. Tested hoists A+B this morning after 4 days crane power off. No delay; everything works fine. Based on the past 40 days of morning check results, the no-delay status looks stable now. Daily and multi-day power-off checks on hoist B will continue.

2025-01-02. Tested hoists A+B this morning. The last crane use in 2024 should have been December 18th. No delay; crane works fine.

Entry  Tuesday, September 03, 2024, 11:40, David Wang, ITW, Standard Operation, TM3, TiC#8, TM3 has been moved from ITW to S-W silo. 

TM3 has been moved from ITW to the S-W silo. The move was smooth.

Entry  Tuesday, August 27, 2024, 13:30, Frank Song, ITW, Maintenance, TM3, TiC#8, module disconnection TM3_with_TiC#8_disconnection_in_ITW_2024-08-27.pdf

 TM3 is disconnected and all water lines are purged/refilled with 50 psi nitrogen. Please see the attached pictures.

Entry  Monday, August 26, 2024, 09:57, Travis Cave, Spent Target Vault, Standard Operation, , Nb#10, Spent Target Move Target_Index_2024-08-26.pdf

 Nb#10 was placed in pail #295 and moved from the south hot cell to the spent target vault, spot 4A. It was 230 uSv/hr when removed from the SHC. See attached PDF for vault details.

Entry  Wednesday, August 21, 2024, 10:53, Travis Cave, Spent Target Vault, Standard Operation, , TiC#7, Spent Target Move Target_Index_2024-08-21.pdf

 TiC#7 was placed in pail #294 and then moved from the south hot cell to the spent target vault, spot 4B. It was 82.3 mSv/hr when removed from the SHC. See attached PDF for details.

ELOG V2.9.2-2455