RH-ISAC Logbook
Entry  Friday, February 16, 2024, 10:57, Adam Newsome, Spent Target Vault, Standard Operation, , , F-308 flask loading - three flasks loaded Target_Index_2024-02-16.xlsm

Today (2024-02-16), three flasks were loaded in preparation for shipment with the following targets: UCx#39, UCx#41 and C#3.

Note: prior to shipping, the pails within these flasks will need to have a crimped lifting loop added to them.

The mini-storage vault is now EMPTY.

The target index spreadsheet has been updated.

Entry  Friday, March 15, 2024, 10:04, Adam Newsome, South Hot-Cell, Standard Operation, TM3, , TM3 Move - SHC to NE Silo 

TM3 was moved from SHC to NE silo. Cable tray was recently installed.

Entry  Tuesday, June 18, 2024, 15:04, Adam Newsome, Spent Target Vault, Maintenance, , , Annual Inspection of Main and Mini Storage Vaults  PXL_20240618_213819000.jpg PXL_20240618_213905146.jpg

The annual inspection of the main storage vault will be skipped this year since extensive inspection and testing were performed during the re-alignment job in May (see this e-log for details: e-log 2584). This included successful testing of the emergency close mechanism.

The mini storage vault was informally inspected by Adam Newsome on 2024-06-18. The following was checked:

  • Cabling and cable management
  • Connectors (note: the strain relief for the remote control input cable was loose - it was tightened)
  • Controls box - external connectors and wiring
  • Limit switches - wiring, mounting
  • Motor visual check
  • Drive wheels visual check
  • Drive track visual check (note: the track had small pieces of metal shavings on it which were wiped off)
  • Bearings visual check
  • Miscellaneous fasteners visual check

Photos can be found in G:\remote handling\Facilities and Projects\ISAC\Mini Storage Vault\Inspections

The inspection was generally successful. No areas of concern. Regular inspection will resume next year.

Entry  Tuesday, June 18, 2024, 15:16, Adam Newsome, Crane, Maintenance, , , Pail Handling Tool - Annual Inspection PXL_20240618_213742257.jpg PXL_20240618_213455575.jpg PXL_20240618_213626154.jpg

The pail handling tool, which is used to move 5 gallon pails around the target hall, was inspected by A. Newsome. The main tool was checked, but not the spare, as it is currently in a disassembled and unusable state. The tool appears to be in good working condition. The following items were checked:

  • Inspect structural integrity of long tube - check for bending, warpage, dents, cracks, rust/corrosion
  • Inspect structural integrity of hook assembly - check for bending, warpage, dents, cracks, rust/corrosion
  • Inspect fasteners - check all fasteners installed as per drawings, fasteners are tight (note: one fastener on the crane interface subassembly was loose - all fasteners in this subassembly were tightened as a precautionary measure)
  • Inspect crane interface assembly - check for damage and wear where the tool connects to the crane
  • Check alignment - verify the tool connects to the crane and hangs vertically with no major misalignment (this step was not done - alignment is regularly checked during operations, and there was no concern as the tube was straight)

 Note: these tools are being inspected monthly by D. Wang and F. Song.

Photos can be found in G:\remote handling\Facilities and Projects\ISAC\Pail Handling Tool\Inspections\June 2024

IMPORTANT: the maximum load capacity and lifter tag for the tool are not present. There is a plan in place to affix these two labels in the very near future.

Entry  Friday, July 05, 2024, 14:13, Adam Newsome, Waste Package/Ship, Standard Operation, , , F-308 flask loading - five flasks loaded  Target_Index_2024-07-05.xlsm

Today (2024-07-05), five flasks were loaded in preparation for shipment with the following targets: SiC#43 (Pail 272), SiC#44 (Pail 275), Ta#63 (Pail 266), UC#40 (Pail 278), UC#38 (Pail 274).

The crimped lifting loop has been applied to each pail. The locks have not yet been applied to the flasks at the time of writing this log, but will be done next week.

The mini-storage vault is now EMPTY. Because some of the pails had contamination (200 - 1500 CPM), it is highly recommended that the mini-storage vault be cleaned or at least swiped for contamination.

The target index spreadsheet has been updated (attached).

Entry  Wednesday, July 10, 2024, 09:55, Adam Newsome, Safe Module Parking, Maintenance, , , Annual electrical and mechanical inspection of Safe Module Parking  PXL_20240710_160636341.jpg PXL_20240710_160506424.jpg

An annual electrical and mechanical inspection was performed on the SMP today by Adam Newsome, Travis Cave, Riley Sykes, Maico Dalla Valle, and Jason Zhang as per Work Permit  I2024-07-10-1. The following items were checked:

  • Check condition of wiring for physical/radiation/UV damage
    • OK, no signs of significant damage
  • Check for cable tray debris or damage
    • There were multiple items in the cable tray which do not belong there (misc. pieces of wood and metal). These items were not disposed of, but should be in the near future.
  • Check inside control panel: components and wiring, labeling
    • Labeling intact, components in good condition
    • A random selection of wires was pull-checked - OK
  • Verify camera views
    • This was not inspected in detail as it is something that would be done prior to remote operation
  • Check connectors for damage
    • All connectors OK
  • Check pendant for damage and verify labeling intact
    • Pendant OK
  • Lid open/close test
    • Lid was opened and closed multiple times - smooth operation, no concern
    • Lid close logic was checked - OK
  • Vessel rotation and limit switch check
    • Both CW and CCW limits were checked, functioning
    • No concern on rotation - chain drive, rotary limit switch, and cable reel all functioning normally
  • Lubricate chains
    • Not done - should be done next year
  • Inspect tensioners
    • Done - OK
  • Tighten fasteners
    • Not all fasteners were checked - washers will be replaced soon so this was not deemed necessary
    • Approximately 50 washers are missing for socket head screws in slotted parts. Not safety critical but they should be replaced soon.
  • Visual inspection
    • All OK

Overall the inspection was successful.

Entry  Wednesday, July 17, 2024, 15:06, Adam Newsome, Facilities, Standard Operation, , , Safety Walkaround Complete - SHC/NHC Area 

A safety walkaround for July 2024 was completed for the B2 level by A. Newsome. No deficiencies to report.

Results can be found in the master spreadsheet

Entry  Thursday, September 05, 2024, 12:50, Adam Newsome, Crane, Maintenance, , , Overhead crane: up/down hoist delayed start issue 

D. Wang reports that recently the overhead crane has been exhibiting a delayed start (approx. 1 min) when operating in local mode. This issue applies to the main hoist's up/down functionality only and does not seem to apply to other motion axes. T. Kauss briefly investigated but did not find any obvious cause.

This issue will be monitored and investigated over the following weeks and this log will be updated as more information becomes available.

Suggested troubleshooting steps:

  • Determine whether the issue is isolated to local or remote mode (it appears to be present in both local and remote mode)
  • With help from someone locally operating the crane, confirm at the receiver in the control room whether signals are coming through for up/down commands immediately, or whether the receipt of signals itself is delayed. At the same time, confirm whether the PLC input card is receiving the command signals from the receiver immediately or in a delayed manner. (it appears that the command signals are being received by the VFDs - the issue seems to be on the output side)
  • Time the delay, and repeat to confirm if the timing is consistent every time as reported by D. Wang (it appears timing is inconsistent. After a weekend of no use, it was approximately 2 minutes. After a few hours of no use, it was around 10 seconds).
  • Check if delay is present across various crane positions in the target hall (completed by D. Wang - result: yes)
  • Inspect controls hardware in the cabinet in the control room, as well as the remote IO on top of the crane, for obvious issues (checked cabinet but not remote IO - nothing obvious)
  • Go online with the PLC and test up/down commands to see if the program indicates any obvious issues (note: this may not actually help - seems to be an electrical issue isolated to the VFD-related electronics)
  • Mechanical inspection of the motor (not likely an issue)
  • Disconnect motor, repeat test of trying to command up/down motion and see if the motor itself had any effect on the delay
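One way to reason about the idle-time dependence reported above (roughly 2 minutes of delay after a weekend of no use, roughly 10 seconds after a few hours) is a slowly recharging capacitance somewhere in the drive electronics: the longer the idle period, the deeper the discharge and the longer the recovery. The sketch below is purely hypothetical - both constants are invented for illustration, not measured:

```python
import math

# Hypothetical model only: IF the delay is caused by a weak charging
# circuit slowly recharging capacitance that self-discharges while the
# crane sits idle, the delay should grow with idle time and saturate.
# Both constants below are invented for illustration, not measured.
DISCHARGE_TAU_H = 12.0   # assumed self-discharge time constant (hours)
MAX_DELAY_S = 120.0      # observed worst-case delay (~2 min after a weekend)

def predicted_delay(idle_hours: float) -> float:
    """Delay assumed proportional to how far the capacitance has discharged."""
    discharged_fraction = 1.0 - math.exp(-idle_hours / DISCHARGE_TAU_H)
    return MAX_DELAY_S * discharged_fraction

for idle in (3, 12, 60):
    print(f"idle {idle:>2} h -> predicted delay ~{predicted_delay(idle):.0f} s")
```

The shape (tens of seconds after a few hours of idle, minutes after a weekend) roughly matches the reported behaviour, which is one reason a charging-circuit fault is worth pursuing.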

Update, 2024-09-09 [DW]: Confirmed that both local and remote modes exhibit the same delayed main hoist issue after the crane is switched on. The delay time is 1 minute 40 seconds to 2 minutes. It happens mostly the first time the crane is switched on each day; however, if the crane is then left unused for the rest of the day, the problem reappears 5 to 6 hours later the same day.

Update, 2024-09-09 [AN]: Checked again around 11:30am... tested running both hoists A and B in remote mode. Upon first attempt to lower hoist, no motion occurred. Hoist A VFD exhibited fault code 51, and Hoist B exhibited fault code 52. Both hoists appeared to receive the command from the PLC to attempt to move. Both hoists (initially) had their "ready" status as ON. When attempting to move, however, hoist B's ready status dropped out. Note also that the delay observed between the failed attempt to start and when motion was actually possible was only approximately 10-20 seconds. Perhaps this correlates with the fact that the crane was recently operated this morning. It is suspected that the charging circuit for hoist B's DC bus voltage is faulty.
Tomorrow, another test will be performed by checking A and B independently to see if one can run but the other cannot.

 

Update, 2024-09-10 [AN]: Tested the crane using only Hoist A: working upon first power-up of the day. Tested using only Hoist B: not working, fault 52 re-appears. We are certain the issue lies with Hoist B. Furthermore, upon observing the motor contactors and status LEDs when attempting to energize the motor, the following is observed:

  • For Hoist A (normal, working operation) - on power up K1 toggles on (in). When pressing down button, K7 toggles (in). The hoist begins to move.
  • For Hoist B (non-functioning), on power up K1 toggles on (in). When pressing down button, K7 does NOT toggle. K1 toggles OFF (it should not) then comes back, then triggers the fault.

Upon further inspection it was noted that for K1 for Hoist B, there appears to be a snubber (XEB2202) wired in across the A1 and A2 terminals of the contactor. The fact that a time-dependent circuit is involved matches earlier theories about a charge-timing related issue. Suggested action: attempt to remove the snubber and test again to determine if the issue persists.

 

Update, 2024-09-11 [AN]: Under work permit I2024-09-10--3, the following was tested and observed:

  • Run Hoist B to confirm it is not working (expected behaviour) - confirmed
  • Power off crane, remove the snubber from Hoist B's contactor K1
  • Power on crane, attempt to run Hoist B - the hoist did not run
  • It was also noticed that Hoist A does in fact have a snubber installed as well - it was hidden. The capacitance of Hoist B's snubber was measured and confirmed to match its rated value, so it is suspected to be working fine.
  • This indicates that the snubber is not the issue. The snubber was reinstalled and the system returned to its normal state. Tested - working.

At this time the root cause remains unknown, but the snubber has been eliminated from the possibilities.

It appears that the issue can be isolated to the fact that contactor K1 momentarily toggles off when attempting to operate the hoist. This short blip would explain the fault code related to insufficient line voltage. The drawings indicate the only way that K1 can turn off is if the 48 VAC supply from the transformer drops out momentarily, or if an unnamed relay located (presumably) in the VFD momentarily toggles.

Further troubleshooting steps could include:

  • Probe for 48VAC at A2 terminal of K1 and attempt to operate the hoist - see if it drops off briefly. If so, the issue is either the transformer or the relay contact in the VFD. Perform continuity check across the relay contact, repeat attempt to operate hoist, and determine if it is the relay contact causing the issue. This test will significantly isolate the issue.
  • Check inside Hoist B's VFD circuitry and measure the DC bus voltage during attempted operation, and compare to Hoist A. There may be an issue with the charging circuit inside the inverter.

 

Update, 2024-09-13 [AN]: Under work permit I2024-09-10--3, the following was tested and observed:

  • Probe the A2 terminal of K1 for Hoist B with respect to the transformer's 0V output upon initial power-up of the system: ~53VAC measured (should be 48VAC but 53 is acceptable)... this confirms that the appropriate relay coil voltage is present upon power-on, as expected because it is observed that the relay toggles upon power-up.
  • Continue probing A2 while attempting to jog Hoist B down, with min-hold set on the multimeter to check for voltage drops: the voltage measured was approximately 24VAC during one attempt and 36VAC during another. This implies that K1's coil voltage does in fact drop out momentarily, resulting in K1 very briefly disengaging, which causes the observed VFD fault. The fact that the measured min voltage differs between attempts can be attributed to the multimeter's sampling rate catching the voltage at different points during its decline towards 0.
  • Because of the aforementioned test results, it is confirmed that there is an issue associated with K1's coil voltage briefly dropping out when attempting to run the hoist. There are only two reasons this could happen: 1) the transformer power output of 48VAC actually drops, or 2) the relay contact in series with this (located inside the VFD) opens up as a result of a VFD fault. The latter is more likely.
  • Upon investigating the VFD further, it was determined that another fault code was present prior to the above-mentioned code 52. This fault code occurred very briefly at the same time as K1 toggling, but was then covered up by code 52. This fault code is 2, which indicates an overvoltage condition - the DC bus voltage has exceeded 911 VDC (135% of the device's maximum nominal voltage of 500V). This would correspond to a supply voltage surge of roughly 35% above nominal.
  • Note: line-to-line voltages were measured at the input to K1 after the "warm up period" and the issue was resolved, when the hoist was sitting idle. These were measured to be almost exactly 480VAC. This represents a reference condition.
  • What seems to be happening is that when the hoist motion attempts to start, there is a line voltage surge for some reason (back-emf?) which causes this fault condition for a temporary instant, but then when the voltage dissipates the fault instantly clears. This explains why contactor K1 very briefly flickers during motion attempt - the fault is only briefly present. But then, fault code 52 takes over and remains present (because of the line voltage disruption).
  • Still, the root cause of this issue is unknown. It is not confirmed whether there is actually a voltage surge or not (to be measured next week), nor why it seems to only happen for the first ~2 minutes of the day.
    It could be attributed to one of the following reasons:
    • Coil voltage rectifying diode partial failure inside K1... the diode may need time to "warm up"
    • Brake solenoid partial failure for Hoist B (causing additional friction which leads to overvoltage condition for the motor)... the brake may need time to "warm up"
    • Charging capacitor issue in DC bus voltage charge circuit inside the VFD
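As a rough arithmetic check on the overvoltage theory above: for a three-phase rectifier, the nominal DC bus voltage is approximately √2 times the line-to-line voltage, so the surge needed to reach the 911 VDC trip threshold can be estimated from the measured 480 VAC reference. A sketch (rectifier approximation only; ignores bus ripple and loading):

```python
import math

# Values taken from this e-log: 480 VAC measured line-to-line at K1,
# and a fault code 2 trip threshold of 911 VDC from the drive manual.
LINE_VOLTAGE = 480.0     # VAC, line-to-line reference measurement
TRIP_THRESHOLD = 911.0   # VDC, overvoltage (fault 2) threshold

# Three-phase rectifier approximation: nominal DC bus ~ sqrt(2) * V_LL.
nominal_bus = math.sqrt(2) * LINE_VOLTAGE          # ~679 VDC
required_surge = TRIP_THRESHOLD / nominal_bus - 1  # fraction above nominal

print(f"nominal DC bus ~{nominal_bus:.0f} VDC")
print(f"surge needed to trip fault 2: ~{required_surge:.0%}")
```

A surge of roughly 34% above the nominal bus would be needed to trip fault 2, consistent with the ~35% figure noted above.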

Suggested troubleshooting steps:

  • Probe line-to-line voltage at input terminals to K1 during attempt to operate Hoist B in max-hold mode: check for surge, record values
  • Probe DC bus voltage during the same condition, record value
  • Determine if the above indicate a true overvoltage condition, and determine why this may be

 

Update, 2024-09-16 [AN]: Under work permit I2024-09-10--3, the following was tested and observed:

  • Probe L1-L2, L2-L3, and L1-L3 line voltages on input side of contactor K1 for Hoist B with max hold set on multimeter to confirm whether there is a surge when attempting to move the hoist - no surge was observed. 480VAC constant was measured in each case.
  • There may be another issue causing the DC bus overvoltage condition (an issue with the motor or an issue with the drive itself)

Suggested troubleshooting steps:

  • Probe DC bus voltage during the faulted condition, record value
  • Disconnect the motor from the drive and check motor winding resistances before and after the "warm up" period to see if there is a change, and also compared to Hoist A
  • With the motor disconnected, attempt to run the drive - determine if fault code 2 shows up or if the drive appears to be working... this may eliminate the motor from the list of potential issues

 

 Update, 2024-09-16 [AN]: Under work permit I2024-09-16--3, the following was tested and observed:

  • Probed DC bus voltage on Hoist B's VFD prior to attempting to move hoist, and during the attempt to move it. In both cases it remained a constant 690 VDC. No temporary spike was observed. This is also lower than the threshold that the VFD's manual stated the fault would typically occur at (~911 V) so it casts doubt on whether this is the root cause of the problem.
  • Probed DC bus voltage for Hoist A's VFD, for comparison - same measurement.
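The measured 690 VDC is close to the ideal rectified value for a 480 VAC supply (√2 × 480 ≈ 679 VDC), supporting the observation that no overvoltage is present during these attempts. A small helper sketching that consistency check (the 5% tolerance is an assumption for illustration, not a drive specification):

```python
import math

def bus_voltage_plausible(measured_vdc: float, line_vac: float = 480.0,
                          tolerance: float = 0.05) -> bool:
    """True if a measured DC bus voltage is within `tolerance` (fractional)
    of the ideal rectified value sqrt(2) * V_LL. The 5% default tolerance
    is an assumed value, not a drive specification."""
    expected = math.sqrt(2) * line_vac
    return abs(measured_vdc - expected) / expected <= tolerance

# 690 VDC measured on both hoists vs ~679 VDC expected: plausible,
# and well below the 911 VDC fault-2 threshold.
print(bus_voltage_plausible(690.0))
```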

Suggested troubleshooting steps:

  • Disconnect the motor from the drive and check motor winding resistances before and after the "warm up" period to see if there is a change, and also compared to Hoist A
  • With the motor disconnected, attempt to run the drive - determine if fault code 2 shows up or if the drive appears to be working... this may eliminate the motor from the list of potential issues
  • Swap VFDs between Hoist A and B to determine if the problem tracks the drive
  • Swap K1 between Hoist A and B

 

Update, 2024-09-16 [AN]: Under work permit I2024-09-16--3, the following was tested and observed:

  • Measured motor winding resistance between every combination of lines for both Hoist A and B (for comparison). Note: this was done mid-day, prior to any use of the crane for the day. In each case, the resistance was measured to be 2.2 Ohms. There is no difference between A and B. This is not likely the root cause of the issue.
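The logic of this check can be sketched as follows: balanced line-to-line resistances (here 2.2 Ω across all pairs for both hoists) suggest healthy stator windings, whereas an imbalanced reading would point to a shorted or open winding. This is a hypothetical helper; the tolerance is an assumed value for illustration:

```python
# Hypothetical helper sketching the diagnostic logic: balanced
# line-to-line winding resistances suggest healthy stator windings.
# The 2.2 ohm readings come from this e-log; the tolerance is assumed.
def windings_balanced(readings_ohms, tolerance=0.1):
    """True if all readings agree within `tolerance` (fractional spread)."""
    lo, hi = min(readings_ohms), max(readings_ohms)
    return (hi - lo) / lo <= tolerance

hoist_a = [2.2, 2.2, 2.2]  # L1-L2, L2-L3, L1-L3
hoist_b = [2.2, 2.2, 2.2]
print(windings_balanced(hoist_a) and windings_balanced(hoist_b))
```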

Suggested troubleshooting steps:

  • With the motor disconnected, attempt to run the drive - determine if fault code 2 shows up or if the drive appears to be working... this may eliminate the motor from the list of potential issues
  • Swap VFDs between Hoist A and B to determine if the problem tracks the drive
  • Swap K1 between Hoist A and B

 

Update after email discussion with Kone service tech, 2024-09-23 [AN]:

The Kone service tech said "This is a obsolete inverter and there is not a direct replacement available or parts for repair .  It is recommended to replace inverter with a conversion panel. The conversion panel consists of new, correctly sized components including D2V inverter, to have the same functionality as original panel. All components mounted and prewired to a back panel that fits directly inside the existing enclosure. All inputs and outputs are terminated at a terminal strip. Interconnecting wiring diagrams are also provided for ease of installation. The lead time for a conversion panel is approximately 10-12 weeks after receipt of a Purchase order."

A quote will be obtained from Kone for the replacement.

Update after K7 swap between hoists A and B with Jason, Mike, Julie, 2024-10-09 [DW]:

Contactor K7 was swapped between hoists A and B. On hoist B we saw an F52 fault and K7 did not engage properly. On hoist A we saw an F51 fault, which is "stop limit has been tripped", and K7 also did not engage properly. After a 2 minute wait, both hoists A and B were back to normal. The plan for tomorrow: switch to hoist A and test; right after, switch to B and test.
Update for the past three days' tests and progress, 2024-10-12 [DW]:
2024-10-09 noon: Left the crane powered on for 1 hour, then switched off crane power for the rest of the day ahead of Thursday morning's test.
2024-10-10 morning: Set to hoist A only. Switched on main power and tested hoist A down - normal, no delay. Immediately switched to hoist B and tested hoist B down - normal, no delay. Switched to A and B together and tested hoist up and down - normal, no delay.

2024-10-10 noon: Replaced the spark quenchers on hoist B's K1 and K7. Tested the crane after replacement - everything works fine. Used the crane to lift F308 around 1:30, then switched off the crane for the next morning's test.

2024-10-11 morning: Set to A and B. Tested hoist down twice, 2-3 seconds each time with 3 seconds between - no fault, hoist worked fine. 3 seconds later, tested hoist up and found that the K7s were not on for both A and B. 10 seconds later, tested hoist up again and it worked. Then tested all crane movements - everything was normal. The crane was used for spent target moves until 2:30 pm, then switched off.

2024-10-12 morning: Set to A and B. Right after power-on, tested hoist up for 5 seconds, waited 3 seconds, then tested hoist down for 5 seconds. Repeated the same up and down test within 3 seconds - no fault; hoists A and B work fine. Tested N-S and E-W movement - all good. Lastly, tested hoist up and down for 10 seconds each - hoists A and B are still good. The plan for the next morning test: leave crane power off until Tuesday morning and test hoists A+B (48 hours power off). Also plan to replace the spark quenchers on hoist A's K1 and K7 if any delay is found in Tuesday morning's test.
2024-10-15 morning: Set to A and B. Tested hoist up and down, one click each. I saw the same K7 momentarily-on-then-drop-off symptom as before, with F51 on A and F52 on B. Switched to A right away and tested - A works fine. Then switched to B and tested - B works fine. The total test time from A+B to B then to A was about 30 seconds. Then I switched back to A+B to check - everything works fine, as anticipated.

2024-10-16: Hoist A's K7 snubber was replaced yesterday after the morning test. Tested hoists A+B this morning after 24 hours of crane power off. Hoists A+B work fine on both up and down.

2024-10-17: Tested hoists A+B this morning after 22 hours of crane power off. Hoists A+B work fine on both up and down.

2024-10-18: Tested hoists A+B this morning after 24 hours of crane power off. Hoists A+B work fine on both up and down.

2024-10-21: Tested hoists A+B this morning after 72 hours of crane power off. The delay issue on hoist B appeared; hoist A is fine. B/K7 was momentarily on, then dropped off. At the same time, B/K1 momentarily dropped off then came back on, with an F52 code.

2024-10-22 morning: Tested hoists A+B from the target hall this morning after 22 hours of crane power off. Hoists A+B work fine on both up and down.

2024-10-22 noon: Swapped hoist A/K1 and B/K1. Crane power off at 9:30 am after the flask/pail repacking job.

2024-10-24: Tested hoists A+B this morning after 46 hours of crane power off. Hoists A+B work fine on both up and down. The crane will be left powered off for 48 hours for the next test.

2024-11-21: In the past month, I tested the crane multiple times. 24-hour power-off test results are always good. 48-hour or longer power-off test results are not consistent: hoist B had a 15 to 20 second delay on early checks after the last e-log, but in the recent 6-day and 3-day power-off checks, hoist B had no delay. Next steps: 1) continue multi-day power-off checks; 2) replace hoist B's K1 (line + auxiliary) contactors with new parts.

2024-11-25: Tested hoists A+B this morning after 4 days of crane power off. No delay; everything works fine. Replacement contactors for K1 have been requested. The line contactor is on 5-week back order from Digikey, so we will most likely replace hoist B's K1 in January next year.

2024-12-02: Tested hoists A+B this morning after 4 days of crane power off. No delay; everything works fine. The no-delay status looks stable now, judging by the past 40 days of morning check results. Daily and multi-day checks on hoist B will continue.

2025-01-02: Tested hoists A+B this morning. The last crane use in 2024 was likely 18 Dec. No delay; the crane works fine.

Entry  Wednesday, September 11, 2024, 13:38, Adam Newsome, North Hot-Cell, Repair, , , Spare Manipulator - Y-axis motion not functioning 

Update 2024-09-12: C. Fisher confirms this is actually normal behaviour. The Y-axis motion is inhibited when X and Z are in a certain configuration, which was true when the manipulator was mounted on the wall in storage position. Non-issue. A note about this has been made in the operator manual.

 

It was observed today that for the spare manipulator (Model N, Serial 9351), the Y-axis motion was not functioning. X motion and Z motion are functioning as expected. Note that the Y-axis linear actuator was recently replaced and successfully tested and has not been used since the replacement (see e-log 2422).

The following information was determined from troubleshooting:

  • X motion functioning correctly
  • Z motion functioning correctly
  • Y motion selection functioning correctly - the indicator light on the operator control box lights up to confirm Y is selected, and the relay on the main control board which selects the Y axis toggles, as expected
  • The motor contactors do not engage when Y motion is commanded in either direction. Note that these contactors are shared with the other two motion axes, so the contactors themselves (and presumably the motor controller) are functioning correctly, since the other axes work.
  • Most likely, because of the above reasons, there is an issue with the inhibit signal that is specific to the Y axis. The inhibit signal which runs from the main control board to the motor control board is shared in common with all motion axes (same for the two direction select signals). It is suspected (not yet confirmed) that the microcontroller is not outputting the inhibit signal as it should be. This is not due to a limit switch related issue because the Y axis does not have limits. Troubleshooting this has been difficult because the schematics do not show the full extent of the circuitry.

Further troubleshooting steps:

  1. Check inhibit signal functionality - is it working for Y axis?
  2. Check direction control signal functionality - are they working for Y axis?
  3. Swap motor select wires to "trick" the microcontroller into thinking it is running a different motor, to further isolate the issue.
  4. If required, swap the entire main control board with another manipulator's, to see if the issue is related to the circuitry which is not described in the schematics.

At this time, this issue is not deemed critical because this is a spare unit.

 

Entry  Thursday, September 12, 2024, 14:12, Adam Newsome, Waste Package/Ship, Development, , SiC#44, Failed waste emplacement at CNL: Flask #22, Pail 275 (SiC#44) Report_on_Inspection_of_Pail_275_on_2024-10-03.pdf

On 2024-09-12, CNL reported difficulty with lowering Pail 275 from F-308 flask #22 into their tile hole. The four other pails associated with this shipment were successfully emplaced. It is suspected that the issue was related to a kink in the lowering cable, causing a jam during lowering. A similar issue was observed approximately 1 year ago.

Discussions with CNL to learn more about the root cause are ongoing, and this e-log will be updated when more information is learned. It is expected that this flask will be shipped back to TRIUMF and the pail will need to be repacked.

 

Update 2024-09-12: email from A. Swan at CNL: "4 of the 5 flask were emplaced successfully with F308 #22 SiC#44 having an unsuccessful pull test and left within the flask."
This is the extent of the information they have provided and is not sufficiently helpful to determine what the issue is.

 

Update 2024-10-03: an investigation was performed (work permit C2024-09-26-10). It was determined that the root cause of the issue is attributed to an assembly error - the cable was incorrectly routed through the holder part. See attached report for full details.

 

Entry  Thursday, October 10, 2024, 12:20, Adam Newsome, Facilities, Standard Operation, , , Safety Walkaround complete - ISAC Hot Cells, Target Hall 

A safety walkaround was completed for the ISAC Hot Cells and Target Hall Areas.

The resulting spreadsheet can be found on DocuShare as Document-242733.

Main deficiencies identified:

  • Hot Cells:
    • Phone not working at North hot cell operator station
  • Target Hall:
    • Uncertain if NHC and SHC ventilation pressure gauges have recently been inspected

Action has been taken on all deficiencies.

Entry  Monday, November 25, 2024, 09:56, Adam Newsome, South Hot-Cell, Standard Operation, TM3, , TM3 move - silo to SHC 

TM3 was moved from silo to SHC with no target. Move successful.

Entry  Tuesday, November 26, 2024, 13:54, Adam Newsome, Conditioning Station, Standard Operation, TM3, , TM3 move - SHC to TCS 

TM3 was moved from SHC to TCS with UCx#47. The move was smooth.

Entry  Tuesday, December 03, 2024, 10:51, Adam Newsome, South Hot-Cell, Standard Operation, TM2, , TM2 move - ITE to SHC 

TM2 was moved from ITE to SHC with Ta#68 target. During the move, a railing was lightly bumped by the corner of the module - no obvious damage or issues have been observed. Otherwise, the move was smooth.

Entry  Tuesday, December 03, 2024, 12:28, Adam Newsome, South Hot-Cell, Standard Operation, TM2, , TM3 move - TCS to ITE 

TM3 was moved with UCx#47 from TCS to ITE. The move was smooth.

Entry  Wednesday, December 04, 2024, 10:40, Adam Newsome, South Hot-Cell, Standard Operation, TM2, , TM2 move - SHC to silo 

TM2 was remotely moved from the South Hot Cell to SW Silo. Reading at SHC = 89.8 mSv/h. The move was smooth.

Entry  Thursday, December 05, 2024, 12:00, Adam Newsome, Spent Target Vault, Standard Operation, , Ta#68, Spent target move - Ta#68 in Pail 300 Storage_Vault_Contents_2024-12-05.pdf

Ta#68 spent target was moved in Pail 300 to storage vault location 5A. The dose rate measured at the South Hot Cell was 429 mSv/h at 1m. The target index spreadsheet has been updated and the current vault configuration is attached to this e-log. At this time, the storage vault is getting quite full - it is planned to remove five pails in January 2025.

Entry  Wednesday, December 18, 2024, 12:18, Adam Newsome, Spent Target Vault, Standard Operation, , , Five Pails Transferred from Main Vault to Mini Vault MiniVault_2024-12-18.pdf

Five pails were transferred from main vault to mini vault. Note that there was already one pail in the mini vault, so at present there are six pails staged for packaging into F-308s.

  • Nb#10, Pail 295, Tray 4A
  • C#4, Pail 277, Tray 4C
  • Ta#64, Pail 273, Tray 5C
  • Ta#62 with wire scanner, Pail 256, Tray 7A
  • TiC#5, Pail 279, Tray 3A

The target index has been updated accordingly (see attached but note that the summary does not reflect the sixth pail).

Entry  Wednesday, December 18, 2024, 14:17, Adam Newsome, Pail Handling Tool, Maintenance, , , Pail Handling Tool - Inspection ISAC_Pail_Handling_Tool_Inspection_Record_-_2024.pdf

An informal inspection of the main Pail Handling Tool was performed today as the final one for the year. Note that the main formal annual inspection was performed June 18, 2024 for this year.

The attachment to this e-log serves as a record of all inspections for this tool throughout 2024.

The spare tool was not inspected as it was not in service this year.

Entry  Friday, January 03, 2025, 09:59, Adam Newsome, Facilities, Standard Operation, , , Safety Walkaround complete - ISAC Hot Cells, Target Hall 

A safety walkaround was completed for the ISAC Hot Cells and Target Hall Areas.

The resulting spreadsheet can be found on DocuShare as Document-242733.

Main deficiencies identified:

  • Hot Cells:
    • NHC right manipulator gripper stuck (known, repair planned)
  • Target Hall:
    • 50% of overhead light bulbs burnt out (David Wang contacted Electrical Services to rectify)
    • One RH camera for target pit not working (Travis Cave notified)

Action has been taken on all deficiencies.

ELOG V2.9.2-2455