DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendments filed on 12/26/2025 have been considered.
Claims 1 and 21 are amended. Claims 22-23 are canceled. Claims 27-31 are added. Claims 1-5, 8-21, and 24-31 remain pending in this application, and an action on the merits follows.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 24, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0230033 to Hagen et al., in view of U.S. Patent Application Publication No. 2017/0355081 to Fisher et al.
With regard to claim 1, Hagen discloses a delivery system comprising:
a pallet sled including a pair of tines extending forward from a base, and a load wheel supporting each of the pair of tines, the load wheels each configured to move away from the respective tine to lift the respective tine upward relative to a floor (Fig. 1, paragraph 35, The cart 106, shown as 106A earlier and 106B later as the cart moves forward, travels past shelves 102. The cart 106 can be any sort of cart or other device that can be moved through a retail or inventory environment. Examples include, but are not limited to, shopping carts, pallet jacks, floor cleaners, lifts, autonomous inventory-moving robots, etc. In many cases, the cart 106 can include wheels, a handle or other fixture for moving the cart, and hardware (e.g., baskets, motors, scrub brushes, bags, forklift tines) for purposes other than facilitating imaging of the environment (e.g., transporting inventory, cleaning floors).);
a camera on the pallet sled (Fig. 1, paragraph 36, The fixed camera 110 can be non-movably affixed to the cart 106 to capture images of the environment around the cart 106.); and
at least one computer configured to receive images from the camera, the at least one computer programmed to analyze the images received from the camera (Fig. 1, paragraph 39, To accomplish this, the controller 108 can receive the first images 140 from the fixed camera 110, as indicated by step A (150). The controller 108 can analyze the first images 140 to identify stock conditions in the shelf 102, as indicated by step B (152).).
However, Hagen does not disclose wherein the at least one computer is programmed to determine a cleanliness level of a store based upon the images received from the camera, including determining store aisles having dirty areas and liquid spills, wherein the at least one computer is programmed to determine aisles having dirty areas using a machine learning model trained on labeled images of store aisles with and without dirty areas.
However, Fisher teaches wherein the at least one computer is programmed to determine a cleanliness level of a store based upon the images received from the camera, including determining store aisles having dirty areas and liquid spills, wherein the at least one computer is programmed to determine aisles having dirty areas using a machine learning model trained on labeled images of store aisles with and without dirty areas (In some implementations, memory 302 can store a library 324 of images of, for example, spills. In some implementations, this library 324 can include images of spills with different compositions (e.g., water and/or other chemicals) in different lighting conditions, angles, sizes, distances, clarity (e.g., blurred, obstructed/occluded, partially off frame, etc.), colors, surroundings, etc. Library 324 can be used to train controller 304 to identify spills in many conditions will be discussed more at least with reference to FIG. 11, as well as throughout this disclosure. As yet another example, various robots (e.g., that are associated with a manufacturer) can be networked so that images captured by individual robots are collectively shared with other robots. In such a fashion, these robots are able to “learn” and/or share imaging data in order to facilitate the ability to readily detect spills. The images of library 324 can be identified (e.g., labelled by a user (e.g., hand-labelled) or automatically, such as with a computer program that is configured to generate/simulate library images of spills and/or label those library images). In some implementations, library 324 can also include images of spills in different lighting conditions, angles, sizes (e.g., distances), clarity (e.g., blurred, obstructed/occluded, partially off frame, etc.), colors, temperatures, surroundings, etc. From these images, controller 304 can first be trained to identify the spills. Spill detector 112 can then use that training to identify spills in image(s) obtained in portion 1102. Library 324 can then be used in a supervised or unsupervised machine learning algorithm for controller 304 to learn to identify/associate patterns in images with spills., paragraphs 80 and 151).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hagen to include, wherein the at least one computer is programmed to determine a cleanliness level of a store based upon the images received from the camera, including determining store aisles having dirty areas and liquid spills, wherein the at least one computer is programmed to determine aisles having dirty areas using a machine learning model trained on labeled images of store aisles with and without dirty areas, as taught in Fisher, in order to automatically detect spills (Fisher, paragraph 2).
With regard to claim 2, Hagen discloses the at least one computer is programmed to identify SKUs of products in the images received from the camera (paragraph 41, the controller 108 analyzing the second images 142 to detect, for example, identifying information (e.g., barcode, UPC number, product name) from product labels 122).
With regard to claim 3, Hagen discloses the at least one computer is programmed to determine an inventory level of the products in the images (paragraph 41, The controller 108 can then receive second images 142 from the movable camera 112 from the area around the target locations (e.g., locations 124 and 126), as indicated by step E (158), and analyze those second images 142 to identify a product that corresponds to the stock condition, as indicated by step F (160).).
With regard to claim 4, Hagen discloses the at least one computer is programmed to determine the inventory level of the products on shelves in the images (paragraphs 38-39, The controller 108 can identify specific items from the shelves 102 that have inventory conditions, such as being out of stock).
With regard to claim 5, the combination of references discloses and teaches the at least one computer is programmed to analyze pathways traveled by the pallet sled in the images received from the camera, including determining whether the pathways traveled by the pallet sled have obstacles (Fisher, paragraphs 80 and 151, abstract, a robot can have a spill detector comprising at least one optical imaging device configured to capture at least one image of a scene containing a spill while the robot moves between locations. Examiner notes that determining whether there are spills (i.e., obstacles) while the robot moves between locations is considered to correspond to “the at least one computer is programmed to analyze pathways traveled by the pallet sled in the images received from the camera, including determining whether the pathways traveled by the pallet sled have obstacles”).
With regard to claim 24, the combination of references discloses and teaches the at least one computer includes a machine learning model trained with images of pathways with and without obstacles (Fisher, paragraphs 80 and 151, abstract).
With regard to claim 27, Hagen discloses a delivery system comprising:
a pallet sled including a pair of tines extending forward from a base, and a load wheel supporting each of the pair of tines, the load wheels each configured to move away from the respective tine to lift the respective tine upward relative to a floor (Fig. 1, paragraph 35);
a camera on the pallet sled (Fig. 1, paragraph 36); and
at least one computer configured to receive images from the camera, the at least one computer programmed to analyze the images received from the camera, wherein the at least one computer is programmed to analyze pathways traveled by the pallet sled in the images received from the camera (Fig. 1, paragraph 39).
However, Hagen does not disclose determining whether the pathways traveled by the pallet sled have obstacles, including determining whether the pathways traveled by the pallet sled have obstacles using a machine learning model trained on images of pathways with and without obstacles.
However, Fisher teaches determining whether the pathways traveled by the pallet sled have obstacles, including determining whether the pathways traveled by the pallet sled have obstacles using a machine learning model trained on images of pathways with and without obstacles (paragraphs 80 and 151).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hagen to include, determining whether the pathways traveled by the pallet sled have obstacles, including determining whether the pathways traveled by the pallet sled have obstacles using a machine learning model trained on images of pathways with and without obstacles, as taught in Fisher, in order to automatically detect spills (Fisher, paragraph 2).
Claims 8 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0230033 to Hagen et al., in view of Japanese Patent Application Publication No. JP 2004-145801 to Iwahashi et al.
With regard to claim 8, Hagen discloses a pallet sled comprising:
a base (Fig. 1, 2, paragraph 35);
a pair of tines extending forward from the base (Fig. 1, 2, paragraph 35, Examples include, but are not limited to, shopping carts, pallet jacks, floor cleaners, lifts, autonomous inventory-moving robots, etc.);
a load wheel supporting each of the pair of tines, the load wheels each configured to move away from the respective tine to lift the respective tine upward relative to a support surface on which the load wheel is supported (Fig. 1, 2, paragraph 35, In many cases, the cart 106 can include wheels);
an accelerometer mounted to the pallet sled (Fig. 4, paragraph 56, an accelerometer 440);
a camera mounted to the pallet sled (Fig. 1, paragraph 36); and
at least one processor programmed to record locations from the GPS receiver and images from the camera (Fig. 4, paragraphs 45, 51-53 and 56, The system 400 can include a cart controller 402 with one or more processors 404 and memory 406. The data network 408 can collect components including but not limited to low resolution cameras 412, a high resolution camera 414, a high resolution camera controller 416…inertia measurement unit 436. The inertia measurement unit 436 can include a gyroscope 438 and an accelerometer 440 to detect movement of the mobile apparatus and make the abovementioned determinations…provide information identifying a current location of the apparatus 200, such as the location of the apparatus 200 within an interior space, global positioning coordinates (e.g., GPS coordinates), and/or other location information. The controller 204 can be configured to use the location information for any of a variety of purposes, such as using it in combination with images from the cameras 206 and/or 208 to detect stock conditions for products on shelves 212).
However, Hagen does not disclose the at least one processor programmed to determine elapsed time for certain tasks based upon the locations from the GPS receiver and images from the camera.
However, Iwahashi teaches the at least one processor programmed to determine elapsed time for certain tasks based upon the locations from the GPS receiver and images from the camera (The arrival determination unit 5 determines the current position of the garbage truck 30 detected by the GPS position detector 2. The departure determination unit 6 determines the current position of the garbage truck 30 detected by the GPS position detector 2. The stay time calculation unit 8 calculates the elapsed time from when the arrival determination unit 5 determines that the vehicle has arrived at the work site to when the departure determination unit 6 determines that the vehicle has left the work site. It is calculated as the time spent staying on the ground and output as one piece of work determination information. The captured video is sent from the imaging unit 63 to the work determination information notification unit 7 and transmitted from the vehicle-mounted device 1 to the operation management center 40 as one piece of work determination information. The operation management center 40 may determine whether or not the work has been performed based on the transmitted video. Examiner notes that the elapsed time is determined based on the arrival determination unit and the departure determination unit that detect the location of the garbage truck, and that the elapsed time for a task/work is determined based on the captured video from the imaging unit to determine whether the task/work has been performed, which is considered as “at least one processor programmed to determine elapsed time for certain tasks based upon the locations from the GPS receiver and images from the camera”, paragraphs 15-17 and 24).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hagen to include, the at least one processor programmed to determine elapsed time for certain tasks based upon the locations from the GPS receiver and images from the camera, as taught in Iwahashi, in order to determine completion of work at the work place and notify the determination to the management center (Iwahashi, paragraph 7).
With regard to claim 12, Hagen discloses the images are video images (paragraph 3, The image data can take many forms, such as video sequences).
With regard to claim 13, Hagen discloses further including an RFID reader antenna in each of the pair of tines (Fig. 4, paragraphs 50-52, RFID sensor 446. The hardware 400 may be integrated into a cart 106, shopping cart 200, or floor-sweeper 300. Examiner notes that the specific placement (i.e., in the tines) of the RFID sensor is merely an obvious matter of design choice and therefore does not constitute an inventive step.).
With regard to claim 14, Hagen discloses a method for monitoring operation of a pallet sled including:
a) receiving images from a pallet sled while the pallet sled is delivering items to a store (paragraphs 35-36, The fixed camera 110 can be non-movably affixed to the cart 106 to capture images of the environment around the cart 106.); and
b) analyzing with at least one processor the images received in step a) (paragraph 41, the controller 108 analyzing the second images 142).
However, Hagen does not disclose analyzing images received to determine elapsed time for performance of at least one task.
However, Iwahashi teaches analyzing images received to determine elapsed time for performance of at least one task (The stay time calculation unit 8 calculates the elapsed time from when the arrival determination unit 5 determines that the vehicle has arrived at the work site to when the departure determination unit 6 determines that the vehicle has left the work site. It is calculated as the time spent staying on the ground and output as one piece of work determination information. The captured video is sent from the imaging unit 63 to the work determination information notification unit 7 and transmitted from the vehicle-mounted device 1 to the operation management center 40 as one piece of work determination information. The operation management center 40 may determine whether or not the work has been performed based on the transmitted video. The worker movement integrating unit 10 receives the position of the worker from the portable device 60 from the time when the arrival determination unit 5 determines that the vehicle has arrived to the time when the departure determination unit 6 determines that the vehicle has departed. The stay time calculation unit 8, the sensor value acquisition unit 9, and the worker movement accumulation unit 10 constitute a work determination information notification unit 7. However, the work determination information notification unit 7 may include at least one of the stay time calculation unit 8, the sensor value acquisition unit 9, and the worker movement accumulation unit 10. paragraphs 15-17, 19-20, and 24).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hagen to include, analyzing images received to determine elapsed time for performance of at least one task, as taught in Iwahashi, in order to determine completion of work at the work place and notify the determination to the management center (Iwahashi, paragraph 7).
With regard to claim 15, Hagen discloses step b) further includes identifying SKUs of products in the images received in step a) (paragraph 41).
With regard to claim 16, Hagen discloses determining an inventory level of the products in the images (paragraph 41).
With regard to claim 17, Hagen discloses determining an inventory level of products on shelves in the images (paragraphs 38-39).
With regard to claim 18, Hagen discloses step b) further includes analyzing pathways traveled by the pallet sled in the images (paragraphs 33, 36, and 60).
With regard to claim 19, Hagen discloses determining a condition of a store based upon step b) (paragraph 39).
With regard to claim 20, Hagen discloses the at least one processor uses at least one machine learning model to perform step b) (paragraph 92, Other image data, which can be used for training purposes. Examiner notes that using a machine learning model for image analysis is a well-known technique).
Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0230033 to Hagen et al., in view of Japanese Patent Application Publication No. JP 2004-145801 to Iwahashi et al., and further in view of U.S. Patent No. 10,504,302 to Chavez et al.
With regard to claims 9-11, the combination of references substantially discloses the claimed invention; however, the combination of references does not disclose the at least one processor is programmed to record images from the camera based upon signals from an accelerometer exceeding a threshold; and the at least one processor is programmed to synchronize images from the camera with signals from an accelerometer.
However, Chavez teaches the at least one processor is programmed to record images from the camera based upon signals from an accelerometer exceeding a threshold (The vehicle's 100 operator applies the brakes and starts slowing down at a rate of −12.5 m/s.sup.2. The accelerometer would detect that −12.5 m/s.sup.2 exceeds the activation threshold of −12 m/s.sup.2 and send the signal to activate the data recordation system (for example, send a signal to the video or still camera to start data recordation), col. 2, lines 32-57); and the at least one processor is programmed to synchronize images from the camera with signals from an accelerometer (If the vehicle's 100 speed decreases a threshold amount in less than or equal to a specified period of a time the activation sensor 106 will send a signal to the computer 102. The computer 102 will then activate a data recordation system. col. 2, lines 32-57).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of references to include, the at least one processor is programmed to record images from the camera based upon signals from an accelerometer exceeding a threshold; and the at least one processor is programmed to synchronize images from the camera with signals from an accelerometer, as taught in Chavez, in order to create video or still photo data in response to detecting an accident (Chavez, col. 1, lines 35-36).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0230033 to Hagen et al., in view of U.S. Patent Application Publication No. 2006/0255950 to Roeder.
With regard to claim 21, Hagen discloses a pallet sled comprising:
a base (Fig. 1, 2, paragraph 35);
a first tine and a second tine extending forward from the base (Fig. 1, 2, paragraph 35, Examples include, but are not limited to, shopping carts, pallet jacks, floor cleaners, lifts, autonomous inventory-moving robots, etc. ); and
a load wheel supporting each of the first tine and the second tine, the load wheels each configured to move away from the respective tine to lift the respective tine upward relative to a support surface on which the load wheel is supported (Fig. 1, 2, paragraph 35, In many cases, the cart 106 can include wheels).
However, Hagen does not disclose a first RFID reader antenna mounted in the first tine; a second RFID reader antenna mounted in the second tine; and wherein the pallet sled is configured to use the first RFID reader antenna and the second RFID antenna to distinguish which pallet is on which of the first tine and the second tine.
However, Roeder teaches a first RFID reader antenna mounted in the first tine; a second RFID reader antenna mounted in the second tine; and wherein the pallet sled is configured to use the first RFID reader antenna and the second RFID antenna to distinguish which pallet is on which of the first tine and the second tine (FIG. 8A provides a top view schematic of a lift truck 105 with antennas 801, 802 and antennas 803, 804 mounted on double-length tines 810a and 810b, respectively. The antenna system 100 may be mounted on a single tine (as shown in FIG. 1) or on both tines of the forklift. In some environments forklifts incorporate double-length (or longer) tines for carrying two (or more) sets of pallets. It would be desirable to automatically and effectively read both sets of pallets and to distinguish between the two sets. Fig. 8a, paragraphs 17 and 69).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hagen to include, a first RFID reader antenna mounted in the first tine; a second RFID reader antenna mounted in the second tine; and wherein the pallet sled is configured to use the first RFID reader antenna and the second RFID antenna to distinguish which pallet is on which of the first tine and the second tine, as taught in Roeder, in order to enhance efficiency (Roeder, paragraph 7).
Claims 25-26 and 28-30 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0230033 to Hagen et al., in view of Japanese Patent Application Publication No. JP 2004-145801 to Iwahashi et al., and further in view of Canadian Patent Application Publication No. CA 3,234,716 to Schoening.
With regard to claims 25 and 28, the combination of references substantially discloses the claimed invention; however, the combination of references does not disclose one of the certain tasks is moving between a truck and a store; and determining elapsed time between a truck and a store by analyzing the images.
However, Schoening teaches one of the certain tasks is moving between a truck and a store (a product 13 in the storage and shipping environment needs to be relocated to a particular bin 14 or delivered to a particular loading bay 16 and placed on a truck. Examiner notes that the storage shelves can be considered as “a store”, paragraph 106); determining elapsed time between a truck and a store by analyzing the images (each time a product is picked up and subsequently dropped off, a pick up time-stamp 116 and a drop off time-stamp 118 are recorded and stored in the product and order database 27 by the tracking application 36. Examiner notes that a pick-up time-stamp and a drop off time-stamp are determined based on the picking and dropping operation of the forklift, wherein the picking and dropping operation of the forklift is detected based on optical detection devices and/or weight sensors, which is considered as “determining elapsed time between a truck and a store by analyzing the images”, paragraphs 95 and 105-106).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of references to include, one of the certain tasks is moving between a truck and a store; and determining elapsed time between a truck and a store by analyzing the images, as taught in Schoening, in order to manage the tracking of and shipping of products in a storage or warehouse environment (Schoening, paragraph 1).
With regard to claim 29, the combination of references substantially discloses the claimed invention; however, the combination of references does not disclose one of the certain tasks is stocking shelves in a store, as determined as after unloading begins until unloading ends, as determined by the camera or a weight sensor.
However, Schoening teaches one of the certain tasks is stocking shelves in a store, as determined as after unloading begins until unloading ends, as determined by the camera or a weight sensor (The worker may then use a delivery or transport vehicle to pick up the particular product and drop off the particular product at a desired location within the storage facility. Each of the forklifts 18 includes a sensor based detection device 40 (which may be, for example, a laser-based detection device, an optical detection device, etc.) disposed on the front of the forklift 18 and positioned to detect the existence of a product 13 loaded on the forklift 18, i.e., loaded on or positioned on the lift or tongs of the forklift 18, and to detect the existence of a product 13. However, other types of sensors besides lasers could be used in or for the detection device 40 including, for example, weight sensors on the forklift 18, electromagnetic sensors that use other wavelengths of electromagnetic energy to detect the presence of product on or near the forklift 18, sonic detectors, optical detection devices, etc. In particular, the system records a first weight when the product 13 is first picked up and a second weight when the product 13 is delivered., paragraphs 2 and 95).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of references to include, one of the certain tasks is stocking shelves in a store, as determined as after unloading begins until unloading ends, as determined by the camera or a weight sensor, as taught in Schoening, in order to manage the tracking of and shipping of products in a storage or warehouse environment (Schoening, paragraph 1).
With regard to claims 26 and 30, the combination of references substantially discloses the claimed invention; however, the combination of references does not disclose one of the certain tasks is checking in, as determined as after the pallet sled is in a store before pallet unloading begins as determined by the camera or a weight sensor or the pair of tines are lowered; and determining elapsed time for check-in in a store by analyzing the images and based upon a change in weight on the pallet sled as items are removed.
However, Schoening teaches one of the certain tasks is checking in, as determined as after the pallet sled is in a store before pallet unloading begins as determined by the camera or a weight sensor or the pair of tines are lowered; and determining elapsed time for check-in in a store by analyzing the images and based upon a change in weight on the pallet sled as items are removed (By requiring each forklift operator to sign-in, or login, prior to using the system, the user interface device 23, along with the centralized asset tracking and management device 26, can track when a particular forklift operator picked up a particular product 13 at a particular bay 14 and when the particular forklift operator dropped off the particular product 13 at a particular truck or loading bay 16, through the use of time-stamps., paragraphs 95 and 105).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of references to include, one of the certain tasks is checking in, as determined as after the pallet sled is in a store before pallet unloading begins as determined by the camera or a weight sensor or the pair of tines are lowered; and determining elapsed time for check-in in a store by analyzing the images and based upon a change in weight on the pallet sled as items are removed, as taught in Schoening, in order to manage the tracking of and shipping of products in a storage or warehouse environment (Schoening, paragraph 1).
Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0230033 to Hagen et al., in view of U.S. Patent Application Publication No. 2017/0355081 to Fisher et al. and further in view of U.S. Patent No. 10,504,302 to Chavez et al.
With regard to claim 31, the combination of references discloses further including an accelerometer mounted to the pallet sled (Hagen, Fig. 4, paragraph 56, an accelerometer 440) and to determine a condition of the store based upon the cleanliness level (Fisher, paragraphs 80 and 151) and the inventory level of the products on shelves in the images (Hagen, paragraphs 38-39); however, the combination of references does not disclose wherein the at least one computer is programmed to record images from the camera based upon signals from the accelerometer exceeding a threshold.
However, Chavez teaches wherein the at least one computer is programmed to record images from the camera based upon signals from the accelerometer exceeding a threshold (The vehicle's 100 operator applies the brakes and starts slowing down at a rate of −12.5 m/s.sup.2. The accelerometer would detect that −12.5 m/s.sup.2 exceeds the activation threshold of −12 m/s.sup.2 and send the signal to activate the data recordation system (for example, send a signal to the video or still camera to start data recordation), col. 2, lines 32-57); and the at least one processor is programmed to synchronize images from the camera with signals from an accelerometer (If the vehicle's 100 speed decreases a threshold amount in less than or equal to a specified period of a time the activation sensor 106 will send a signal to the computer 102. The computer 102 will then activate a data recordation system. col. 2, lines 32-57).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of references to include, wherein the at least one computer is programmed to record images from the camera based upon signals from the accelerometer exceeding a threshold, as taught in Chavez, in order to create video or still photo data in response to detecting an accident (Chavez, col. 1, lines 35-36).
Response to Arguments
Applicants' arguments in the pre-appeal request filed on 12/26/2025 have been fully considered, but they are not fully persuasive, especially in light of the new art used in the rejections.
Applicants remark that “the combination of references does not disclose wherein the at least one computer is programmed to determine aisles having dirty areas using a machine learning model trained on labeled images of store aisles with and without dirty areas”.
Examiner directs Applicants' attention to the Office action above.
Applicants remark that “the combination of references does not disclose a first RFID reader antenna mounted in the first tine… wherein the pallet sled is configured to use the first RFID reader antenna and the second RFID antenna to distinguish which pallet is on which of the first tine and the second tine”.
Examiner directs Applicants' attention to the Office action above.
Applicants remark that “nothing in Iwahashi would suggest monitoring the elapsed time of carts or other user-powered apparatus with a store”.
Examiner respectfully disagrees. Iwahashi teaches that the stay time calculation unit 8 calculates the elapsed time from when the arrival determination unit 5 determines that the vehicle has arrived at the work site to when the departure determination unit 6 determines that the vehicle has left the work site (paragraphs 15-17, 19-20, and 24). Examiner notes that monitoring the elapsed time of the vehicle as it performs loading/unloading tasks at the work site is considered as “monitoring the elapsed time of carts or other user-powered apparatus with a store”.
Conclusion
Please refer to the attached form PTO-892 for the cited references.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication from the examiner should be directed to Ariel Yu whose telephone number is 571-270-3312. The examiner can normally be reached on Monday-Friday 9:00am-5:00pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Obeid Fahd A, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ARIEL J YU/Primary Examiner, Art Unit 3627