DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 8/4/2025 have been fully considered but they are not persuasive.
Claims 17-38 are pending in this application and have been considered below.
Arguments:
The applicant argues, with respect to the independent claims, that in Eckman the cameras are not directly above the weight sensor, but rather are upstream from the weight sensor.
Response:
The examiner respectfully disagrees. The applicant concedes that Eckman discloses cameras above the weight sensor. Therefore, the limitation that the cameras are “directed toward an area above the platform weight sensor” is the claim limitation in contention. Under the broadest reasonable interpretation (MPEP 2111), the claims only require cameras "mounted about a periphery" and "directed toward an area above." There is no requirement for co-location with the weight sensor, nor do the claims require the cameras to be physically mounted on top of the scale or the lenses to be perpendicular to the floor. Eckman discloses a camera mounted on a frame periphery looking at the items on the scale, which is “directed toward the area above” the scale. Eckman's configurable system satisfies the claim limitations by disclosing that the weight scale (116) and profiling frame (106) operate on the same line. Even if the frame is slightly offset, the cameras are configured to capture the pallet as it moves. In a continuous, configurable conveyor system (which Eckman discloses), the “area above the weight sensor” and the “profiling area” are functionally integrated zones. Eckman’s ¶54 discloses angled paddles providing "different angles/views"; ¶78 confirms configurable frame arrangements; and ¶90 shows a movable weight sensor. Furthermore, the secondary reference CubiScan teaches the benefit of a “360-degree view” and overhead sensing. Combining Eckman’s profiling frame with CubiScan’s comprehensive measurement zone would make it obvious to align the camera field of view with the weighing zone to correlate weight and visual data simultaneously.
Arguments:
The applicant argues that Townsend does not teach the elements of claims 21, 23, and 24.
Response:
The applicant’s arguments regarding Townsend are moot. The examiner has withdrawn the rejection based on Townsend and has cited new prior art, ZetesMedea, which shows a video of gate pivotable between open and closed positions with integrated cameras (see ZetesMedea video at t=4s, 6s, and 12s). The newly cited prior art renders the applicant’s arguments with respect to claims 21, 23, and 24 moot in view of new ground(s) of rejection.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 17-18, and 27 is/are rejected under 35 U.S.C. 102(a)(1) and (2) as being anticipated by Eckman et al. (US 2021/0133666 A1 – hereinafter “Eckman”).
Claim 17.
Eckman discloses a validation system comprising:
a platform weight sensor (¶14 discloses “The system can further include a weight scale that is positioned along the conveyor belts and that is configured to record a weight of the pallet as it moves down the conveyor belts.”; Fig. 1, 116 & ¶56);
a plurality of cameras mounted about a periphery of the platform weight sensor and directed toward an area above the platform weight sensor (¶9 discloses “the plurality of cameras being configured to capture images of a pallet as the pallet passes through the opening of the pallet profiling frame”);
and at least one computer receiving images from the plurality of cameras and weight information from the platform weight sensor (¶9 discloses “pallet profiling computer system that is configured to receive the images captured by the plurality of cameras”; ¶14 discloses “The information provided to the warehouse management system can include the weight for the pallet.”).
Claim 18.
Eckman discloses the validation system of claim 17 wherein the at least one computer is programmed to analyze the images and identify a plurality of SKUs associated with a stacked plurality of items in the images (¶11 discloses “The type of goods can be identified based, at least in part, on optical identification of one or more goods identifying markings from the images of the pallet … The one or more goods identifying markings can include a barcode identifying a SKU corresponding to the type of goods (emphasis added).”; ¶93 discloses “SKUs”).
Claim 27.
Eckman discloses the elements recited in claim 27 for at least the reasons discussed in claims 17 and 18 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 20, 22, 25, 37, and 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eckman in view of CubiScan 1200-AKL brochure (hereinafter “CubiScan”).
Claim 20.
Eckman discloses the validation system of claim 17 further including a bullpen (Fig. 1, 106 & ¶56), wherein the plurality of cameras are mounted to the bullpen (¶9 discloses “a plurality of cameras mounted to the pallet profiling frame”), wherein the plurality of cameras includes four cameras (Fig. 1, 108A-N and/or 118A-N; ¶54).
Eckman teaches a camera-based pallet inspection system except for specifically teaching a “bullpen.” The bullpen is interpreted as an area such as a temporary holding area for packaged goods before further processing or shipment. However, CubiScan, in the same field of endeavor, teaches a “bullpen” as shown in the figures below. The CubiScan system provides an explicit teaching of comprehensive, multi-sided data capture. Its "overhead-mounted sensor configuration" is described as allowing "360 degree access to the measuring area" and the ability to measure freight in any orientation. The system uses imaging that passes over the freight to scan a "three-dimensional measurement area" in conjunction with an “optional floor-mounted deck scale.” This demonstrates that the concept of capturing a complete, 360-degree data profile of a large object for logistics purposes was a known and commercially practiced solution well before the priority date.
[CubiScan brochure figures: media_image1.png, media_image2.png (greyscale)]
Therefore, it would have been obvious to one of ordinary skill in the art to combine Eckman and CubiScan before the effective filing date of the claimed invention. Eckman teaches a camera-based pallet inspection system with four cameras. The CubiScan brochure teaches that in this art, it is a known technique to perform a comprehensive, 360-degree scan of an object for logistics purposes. A person of ordinary skill in the art (PHOSITA) would have been motivated to configure the plurality of cameras in Eckman's system to capture four different sides to achieve the known benefit of complete data capture for improved validation. This improved validation is achieved by creating a comprehensive, 360-degree view of the pallet, which minimizes the risk of missed or misidentified items that may be hidden from a single viewpoint. Capturing all sides ensures a more accurate SKU count to compare against the shipping manifest, allows for errors to be corrected before the pallet is wrapped and shipped, and creates a complete visual record to resolve any subsequent customer disputes regarding the shipment’s contents.
Claim 22.
The combination of Eckman and CubiScan discloses the validation system of claim 17 wherein the plurality of cameras includes three cameras mounted to a fixed structure proximate the platform weight sensor (Eckman Fig. 1, 108A-N and/or 118A-N; ¶54; where the choice of three cameras is supported by at least three places to mount a camera and is a matter of design choice which a person of ordinary skill in the art would have found obvious; Fig. 1, weight scale 116 & ¶56; CubiScan discloses a frame surrounding a weight sensor for mounting Eckman’s cameras).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Eckman and CubiScan before the effective filing date of the claimed invention for at least the reasons discussed above for claim 20, mutatis mutandis.
Claim 25.
Eckman discloses the validation system of claim 17 wherein the plurality of cameras are configured such that each takes an image simultaneously or in sequence (¶55 discloses “can continuously take pictures of each item 102 (e.g., boxes, food, other goods) on a pallet 114 from multiple perspectives and angles as the pallet 114 moves along on the conveyor belt 104.”; CubiScan discloses “Captures … data simultaneously”).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Eckman and CubiScan before the effective filing date of the claimed invention for at least the reasons discussed above for claim 20, mutatis mutandis.
Claim 37. (New)
The combination of Eckman and CubiScan discloses the validation system of claim 17 wherein the plurality of cameras are configured such that each takes an image in sequence (Eckman ¶55 discloses “can continuously take pictures of each item 102 (e.g., boxes, food, other goods) on a pallet 114 from multiple perspectives and angles as the pallet 114 moves along on the conveyor belt 104.”; Fig. 21 and ¶¶203-205 describe a sequence of taking pictures – steps 2100, 2102, 2104, 2106; CubiScan discloses “Captures … data simultaneously”).
Claim 38. (New)
The combination of Eckman and CubiScan discloses the validation system of claim 17 wherein the plurality of cameras includes four cameras (Fig. 1, 108A-N, where the A-N designation discloses at least four cameras; ¶55: “can continuously take pictures … on a pallet 114 from multiple perspectives and angles.”) and wherein each of the four cameras is configured to generate an image of a different face of an object on the platform weight sensor (CubiScan discloses “360 degree access to the measuring area … Measures freight in any orientation … Captures dimensional and reweigh data simultaneously” and “The CubiScan 1200-AKL is a large-scale static dimension scanning device that can work in conjunction with a heavy-capacity floor scale (or in a stand-alone position when weight is not required). Its overhead-mounted sensor configuration provides a comprehensive view of the freight measurement area while allowing access from any direction.”).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Eckman and CubiScan to configure the four cameras of Eckman to each capture a different face of the object. The motivation is to minimize the risk of missed or misidentified items and ensure a complete visual record. This is a predictable use of prior art elements according to their established functions.
Claims 21, 23, and 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eckman in view of CubiScan, further in view of ZetesMedea Dock Door Control Press Release (2014) in conjunction with ZetesMedea dock door control video (2013) (hereinafter “ZetesMedea”).
Claim 21.
The combination of Eckman and CubiScan discloses the validation system of claim 20 wherein the bullpen includes a gate (Eckman ¶14 discloses “groups of the cameras to capture the images based on the position of the pallet along the conveyor belts as indicated by signals from the optical gateways.”; where a gateway is a gate, i.e., a space or area along a belt through which a pallet must pass).
Eckman discloses all of the subject matter described above except for specifically teaching “a gate pivotable between an open position and a closed position.” However, ZetesMedea, in the same field of endeavor, teaches “a gate pivotable between an open position and a closed position” (ZetesMedea Dock Door Control video, t = 4, 6, and 12 seconds; https://www.youtube.com/watch?v=i2ynsC1RneI).
[ZetesMedea video stills: media_image3.png, media_image4.png, media_image5.png (greyscale)]
Therefore, it would have been obvious to one of ordinary skill in the art to combine Eckman, CubiScan, and ZetesMedea before the effective filing date of the claimed invention. Eckman teaches a foundational camera-based pallet inspection system designed to reduce errors and inefficiency by using cameras and a weight sensor to check pallet contents against a pick sheet. A PHOSITA would recognize the value of this automated check but would also seek to improve its thoroughness.
CubiScan teaches the benefits of a more complete data capture method. CubiScan’s system has the ability to “dimension freight quickly and accurately” by providing a “comprehensive view” and a “360 degree access to the measuring area.” A PHOSITA would be motivated to incorporate this principle of a 360-degree data capture into Eckman’s validation system to ensure no items are missed, achieving a more robust and reliable initial validation.
Finally, with a comprehensively validated pallet, the ZetesMedea press release and the accompanying visual evidence teach that it was known years before the priority date to implement such an inspection system with "Cameras installed at the dock door" to automatically control the loading process to “eliminate human errors” and ensure “100% accurate shipments” by preventing operators from “loading the wrong items.” A PHOSITA would have been motivated to modify the stationary frame of Eckman into a movable gate with integrated cameras, as explicitly taught and shown by the Zetes system, to achieve the predictable result of a more efficient, automated inspection portal. This combination would predictably result in an automated system that validates pallet contents with high accuracy and eliminates human error, leading to faster, more efficient, and error-free logistics operations.
Claim 23.
The combination of Eckman and CubiScan discloses the validation system of claim 22 wherein the three cameras (Eckman Fig. 1, 108A-N and/or 118A-N; ¶54; where the choice of three cameras is obvious to a PHOSITA) are mounted to a bullpen (CubiScan discloses a frame surrounding a temporary holding area for packaged goods before further processing or shipment, i.e., a bullpen), the validation system further (CubiScan p. 1 discloses “the measuring area”).
Eckman and CubiScan disclose all of the subject matter described above except for specifically teaching “including a fourth camera mounted to a gate.” However, ZetesMedea, in the same field of endeavor, teaches “including a fourth camera mounted to a gate” (p. 1 discloses “Using ImageID technology ZetesMedea Dock Door Control automates the pallet control process at the dock door.”; p. 2 discloses “Cameras installed at the dock door detect and analyse pallet labels instantly as goods are loaded. This triggers a go/no go signal, preventing operators from loading the wrong items on the wrong vehicle.” See also the promotional video published on YouTube, which provides a video of the dock door control process. It shows a camera system at a dock door control gate scanning pallets and providing immediate feedback to the operator, visually demonstrating the integrated system in action. https://www.youtube.com/watch?v=i2ynsC1RneI).
[ZetesMedea video stills: media_image3.png, media_image4.png, media_image5.png (greyscale)]
Therefore, it would have been obvious to one of ordinary skill in the art to combine Eckman, CubiScan, and ZetesMedea before the effective filing date of the claimed invention. The combination of Eckman, CubiScan, and ZetesMedea renders claim 23 obvious for the reasons discussed above for claim 21, mutatis mutandis.
Claim 24.
The combination of Eckman, CubiScan, and ZetesMedea discloses the validation system of claim 23 wherein the at least one computer (Eckman ¶9 discloses “pallet profiling computer system”) is configured to control opening and closing of the gate (ZetesMedea p. 2 discloses “Cameras installed at the dock door detect and analyse pallet labels instantly as goods are loaded. This triggers a go/no go signal, preventing operators from loading the wrong items on the wrong vehicle.”; ZetesMedea Dock Door Control video, t = 4, 6, and 12 seconds; https://www.youtube.com/watch?v=i2ynsC1RneI).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Eckman, CubiScan, and ZetesMedea before the effective filing date of the claimed invention. The combination of Eckman, CubiScan, and ZetesMedea renders claim 24 obvious for the reasons discussed above for claim 21, mutatis mutandis.
Claims 19, 26, 28, and 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eckman in view of Dal Mutto et al. (US 2020/0372625 A1 – hereinafter “Dal Mutto”).
Claim 19.
Eckman discloses the validation system of claim 18 wherein the at least one computer includes a machine learning model trained on images of packages (Eckman ¶7 discloses “analysis of pallet images using any of a variety of machine learning and/or other appropriate techniques to generate accurate results … trained based on machine learning”).
Eckman discloses all of the subject matter as described above except for specifically teaching “beverage containers.” However, Dal Mutto in the same field of endeavor teaches “beverage containers” (¶134 discloses “depicts packages stacked on a pallet by training a neural network in accordance with labeled training data that includes representative images (e.g., captured in a warehouse setting) of packages that are on a pallet and packages”; ¶140 discloses “logistics facility may process a wide range of different types of goods, such as … beverages”).
It would have been obvious to a person of ordinary skill in the art to modify the machine learning system of Eckman to include models trained on “beverage containers” as taught by Dal Mutto. Eckman discloses a validation system that relies on machine learning to optically identify various “types of goods” and “SKUs” (¶11). A person of ordinary skill recognizes that the accuracy of a machine learning model is directly dependent on the relevance of its training data to the actual inventory being processed. Dal Mutto, in the same field of logistics automation, teaches that warehouses process “a wide range of different types of goods, such as … beverages” (¶48) and teaches the necessity of “training a neural network in accordance with labeled training data” that includes representative images of these packages (¶134). Therefore, a PHOSITA would be motivated to incorporate Dal Mutto’s specific training class (beverages) into Eckman’s general identification system. The predictable result is a validation system with enhanced accuracy and robustness specifically optimized for identifying beverage SKUs – a common but distinct category of freight that requires specific visual training data due to unique packaging (e.g., bottles, cans, multipacks) compared to standard cardboard boxes. This is a simple substitution of one known training set for another to obtain the predictable result of identifying that specific item type.
Claim 26.
The combination of Eckman and Dal Mutto disclose the elements recited in claim 26 for at least the reasons discussed in claim 28 below.
Claim 28.
The combination of Eckman and Dal Mutto discloses the validation system of claim 27 wherein the at least one computer includes at least one machine learning model trained on images of packages of beverage containers (Dal Mutto ¶134 discloses “depicts packages stacked on a pallet by training a neural network in accordance with labeled training data that includes representative images (e.g., captured in a warehouse setting) of packages that are on a pallet and packages”; ¶140 discloses “logistics facility may process a wide range of different types of goods, such as … beverages”), wherein the at least one computer is programmed to analyze the images to identify at least one SKU using the at least one machine learning model (Eckman ¶7 discloses “analysis of pallet images using any of a variety of machine learning and/or other appropriate techniques to generate accurate results”; ¶11 discloses “The type of goods can be identified based, at least in part, on optical identification of one or more goods identifying markings from the images of the pallet … The one or more goods identifying markings can include a barcode identifying a SKU corresponding to the type of goods (emphasis added).”).
The motivation to combine is the same as claim 31.
Claim 31.
Eckman discloses a validation system (¶7 discloses “automated pallet profiling system”) comprising:
a platform (¶8 discloses “a conveyor belt or other mechanism that moves pallets”);
a plurality of cameras mounted about a periphery of the platform (¶9 discloses “a plurality of cameras mounted to the pallet profiling frame”; Fig. 1) and directed toward an area above the platform (¶9 discloses “the plurality of cameras being configured to capture images of a pallet as the pallet passes through the opening of the pallet profiling frame”); and
at least one computer storing at least one machine learning model trained on images of packages (¶9 discloses “pallet profiling computer system”; ¶7 discloses “analysis of pallet images using any of a variety of machine learning and/or other appropriate techniques to generate accurate results … trained based on machine learning”), the at least one computer configured to receive images from the plurality of cameras (¶9 discloses “pallet profiling computer system that is configured to receive the images captured by the plurality of cameras”), the at least one computer programmed to identify at least one SKU associated with at least one item in the images using the at least one machine learning model (¶7 discloses “analysis of pallet images using any of a variety of machine learning and/or other appropriate techniques to generate accurate results”; ¶11 discloses “The type of goods can be identified based, at least in part, on optical identification of one or more goods identifying markings from the images of the pallet … The one or more goods identifying markings can include a barcode identifying a SKU corresponding to the type of goods (emphasis added).”).
Eckman discloses all of the subject matter as described above except for specifically teaching “beverage containers.” However, Dal Mutto in the same field of endeavor teaches “beverage containers” (¶134 discloses “depicts packages stacked on a pallet by training a neural network in accordance with labeled training data that includes representative images (e.g., captured in a warehouse setting) of packages that are on a pallet and packages”; ¶140 discloses “logistics facility may process a wide range of different types of goods, such as … beverages”).
It would have been obvious to a person of ordinary skill in the art to modify the machine learning system of Eckman to include models trained on “beverage containers” as taught by Dal Mutto. Eckman discloses a validation system that relies on machine learning to optically identify various “types of goods” and “SKUs” (¶11). A person of ordinary skill recognizes that the accuracy of a machine learning model is directly dependent on the relevance of its training data to the actual inventory being processed. Dal Mutto, in the same field of logistics automation, teaches that warehouses process “a wide range of different types of goods, such as … beverages” (¶48) and teaches the necessity of “training a neural network in accordance with labeled training data” that includes representative images of these packages (¶134). Therefore, a PHOSITA would be motivated to incorporate Dal Mutto’s specific training class (beverages) into Eckman’s general identification system. The predictable result is a validation system with enhanced accuracy and robustness specifically optimized for identifying beverage SKUs – a common but distinct category of freight that requires specific visual training data due to unique packaging (e.g., bottles, cans, multipacks) compared to standard cardboard boxes. This is a simple substitution of one known training set for another to obtain the predictable result of identifying that specific item type.
Claims 29-30 and 32-36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eckman in view of Dal Mutto, further in view of CubiScan.
Claims 29 and 32.
The combination of Eckman, Dal Mutto, and CubiScan discloses the elements recited in claims 29 and 32 for at least the reasons discussed for claim 25, mutatis mutandis.
Claims 30 and 33.
The combination of Eckman, Dal Mutto, and CubiScan discloses the elements recited in claims 30 and 33 for at least the reasons discussed for claim 37, mutatis mutandis.
Claim 34. (New)
The combination of Eckman and Dal Mutto discloses the validation system of claim 31 wherein the plurality of cameras includes three cameras (Eckman Fig. 1, 108A-N), wherein two of the three cameras are oriented in opposite directions (Eckman ¶55 discloses “multiple perspectives and angles”) and a third of the three cameras is oriented in a direction perpendicular to the two of the three cameras (CubiScan teaches the benefit of "comprehensive" and "360 degree access to the measuring area”).
Eckman and Dal Mutto do not explicitly state the specific orientation of the cameras. However, CubiScan, in the same field of endeavor, teaches the benefit of "comprehensive" and "360 degree access to the measuring area." It would have been obvious to a person of ordinary skill in the art (PHOSITA) to arrange the cameras of Eckman to achieve the "comprehensive view" taught by CubiScan. To capture the sides of a rectangular pallet and create a 360-degree view, the cameras must necessarily be arranged around the periphery. Configuring two cameras to face opposite sides (e.g., left and right faces) and a third camera to face a perpendicular side (e.g., front face) is an obvious design choice to capture the requisite images of the pallet's vertical faces. As noted in the rejection of claim 22, the choice of three cameras is a matter of design choice which a person of ordinary skill in the art would have found obvious.
Claim 35. (New)
The combination of Eckman, Dal Mutto, and CubiScan discloses the validation system of claim 34 further including a fourth camera (Eckman Fig. 1, 108A-N; where the A-N designation discloses at least four cameras), wherein each of the four cameras is configured to generate an image of a different face of a stack of items on a pallet (CubiScan teaches the motivation to capture a complete “360 degree access to the measuring area").
CubiScan teaches the motivation to capture a complete “360 degree access to the measuring area" to minimize the risk of missed or misidentified items that may be hidden from a single viewpoint. It would have been obvious to a PHOSITA to utilize the four cameras disclosed by Eckman and orient them such that each captures a different face (front, back, left, right) of the stack of items. This modification allows for the comprehensive, 360-degree view explicitly taught by CubiScan to ensure no items are missed.
Claim 36. (New)
The combination of Eckman and Dal Mutto discloses the validation system of claim 31 wherein the plurality of cameras includes four cameras (Eckman Fig. 1, 108A-N; where the A-N designation discloses at least four cameras; ¶55: "continuously take pictures … from multiple perspectives and angles") and wherein each of the four cameras is configured to generate an image of a different face of a stack of items on a pallet (CubiScan teaches the motivation to capture a complete “360 degree access to the measuring area").
The rejection follows the rationale set forth for claim 35 above. Eckman discloses the hardware (four cameras) and CubiScan discloses the motivation (360-degree view). A PHOSITA would be motivated to configure Eckman's four cameras to face the four different sides of the pallet to achieve the known benefit of complete data capture taught by CubiScan.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ross Varndell whose telephone number is (571)270-1922. The examiner can normally be reached M-F, 9-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O’Neal Mistry can be reached at (313)446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ross Varndell/Primary Examiner, Art Unit 2674