Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1 – 20 are pending in this application. Claims 1 and 18 are independent.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1 – 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bennett, Jesse William (US-20190271582-A1, hereinafter "Bennett").
Regarding independent claim(s) 1, Bennett teaches:
A method for detecting a cargo within a cargo compartment (See at least Bennett, ¶ [0025]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…"), the method comprising: capturing an interior of the cargo compartment containing the cargo as an image using at least one camera (See at least Bennett, ¶ [0025, 0030, 0090]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…", "…a camera associated with the computing device takes an image of the trailer interior…") and automatically image processing captured image information (See at least Bennett, ¶ [0025, 0030]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…when the trailer is loaded, the edge detection may detect interruptions in the edges and can use the interruptions of the various planes within the container in order to determine the position and size of the load…"), wherein an algorithm (e.g., algorithm of Bennett) underlying the image processing includes a program part or operator for edge detection (e.g., edge detection of Bennett) (See at least Bennett, ¶ [0025, 0030]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…when the trailer is loaded, the edge detection may detect interruptions in the edges and can use the interruptions of the various planes within the container in order to determine the position and size of the load…"), wherein the program part or operator determines edges of the cargo from the image information that is visible and recognizable from a location of the at least one camera (See at least Bennett, ¶ [0025, 0030, 0149]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…when the trailer is loaded, the edge detection may detect interruptions in the edges and can use the interruptions of the various planes within the container in order to determine the position and size of the load…", "…As with FIG. 3 above, a camera may be located at the rear of the trailer projecting forward…the camera may be at the front of the trailer or on the top of the trailer…"); fitting (e.g., via edge comparisons of Bennett), also via the algorithm, a first model (i.e., merely a graphical representation in para [0029] of Applicant’s PG PUB) of the cargo described by a current edge configuration of the cargo (e.g., detected edges of load/boxes on the trailer floor of Bennett) into a second model described by the edges of the cargo compartment (e.g., detected edges of container of Bennett) by performing a comparison of the recognizable edges of the cargo (e.g., detected edges within the captured image of Bennett) and of edge lengths of the recognizable edges of the cargo (e.g., the length of the edges of Bennett) (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…") with: the edges of the cargo compartment in an unladen state (e.g., state at which the container is empty in Bennett) and edge lengths of the edges of the cargo compartment in the unladen state (e.g., an empty container of Bennett) (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…"); and, determining the cargo space occupied by the cargo on a basis of said fitting (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…").
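The blocked-edge reasoning quoted above (Bennett, FIG. 16: load 1610 blocks part of edge 1620, and the load's extent follows from the proportion blocked) reduces to simple proportional arithmetic. The following is a minimal illustrative sketch only; the function name and values are hypothetical and appear in neither Bennett nor the claims:

```python
def estimate_load_extent(calibrated_len_m, visible_len_m):
    """Estimate how far a load extends along a reference edge.

    calibrated_len_m: edge length measured in the empty (unladen) compartment.
    visible_len_m:    portion of that edge still visible in the current image.
    Returns (blocked fraction, occluded length in metres).
    """
    if not 0 <= visible_len_m <= calibrated_len_m:
        raise ValueError("visible length must lie within the calibrated length")
    blocked_fraction = 1.0 - visible_len_m / calibrated_len_m
    return blocked_fraction, blocked_fraction * calibrated_len_m

# A 12 m floor edge with only 9 m still visible implies a load
# occupying 25% of the edge, i.e. 3 m of its length.
frac, metres = estimate_load_extent(12.0, 9.0)
```

This is the same comparison the claim recites between the current edge configuration and the unladen-state edge lengths, applied to a single edge.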
Regarding independent claim(s) 18, Bennett teaches:
A device (e.g., FIG. 1 of Bennett) comprising: a data processing unit (e.g., FIG. 1, #120 of Bennett) including a computing unit (e.g., FIG. 1, #120 of Bennett) and a non-transitory computer readable storage medium (e.g., computer readable medium of Bennett) having program code stored thereon; a camera (e.g., camera of Bennett) mounted in a cargo compartment (e.g., container of Bennett), said camera being configured to capture an interior of the cargo compartment containing the cargo (See at least Bennett, ¶ [0025, 0030, 0090]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…", "…a camera associated with the computing device takes an image of the trailer interior…") and to transfer captured image information to a computing unit contained in a data processing device for automatic image processing (See at least Bennett, ¶ [0025, 0030, 0090]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…", "…a camera associated with the computing device takes an image of the trailer interior…"); wherein an image processing algorithm is stored on said non-transitory computer readable storage medium, wherein said image processing algorithm includes program parts or operators for edge recognition (e.g., edge detection of Bennett) (See at least Bennett, ¶ [0025, 0030]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…when the trailer is loaded, the edge detection may detect interruptions in the edges and can use the interruptions of the various planes within the container in order to determine the position and size of the load…"), for fitting (e.g., via edge comparisons of Bennett) a cargo model (i.e., merely a graphical representation in para [0029] of Applicant’s PG PUB) into a cargo compartment model and for comparing edges and edge lengths of the cargo (e.g., the length of the edges of Bennett) and the unladen or pre-loaded cargo compartment (e.g., empty container of Bennett) (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…"), and from this determines a cargo space occupied by the cargo (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…"); said program code, when executed by said computing unit, being configured to: capture the interior of the cargo compartment containing the cargo as an image using said camera and automatically image processing captured image information (See at least Bennett, ¶ [0025, 0030, 0090]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…", "…a camera associated with the computing device takes an image of the trailer interior…"), wherein an algorithm underlying the image processing includes a program part or operator for edge detection (e.g., edge detection of Bennett) (See at least Bennett, ¶ [0025, 0030]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…when the trailer is loaded, the edge detection may detect interruptions in the edges and can use the interruptions of the various planes within the container in order to determine the position and size of the load…"), wherein the program part or operator determines the edges of the cargo from the image information that is visible and recognizable from a location of the camera (See at least Bennett, ¶ [0025, 0030]; FIGS. 3 – 9, 11; "…The present disclosure further provides a computing device for cargo load detection in a container…", "…camera based load detection may be used to find edges within a container or trailer…when the trailer is loaded, the edge detection may detect interruptions in the edges and can use the interruptions of the various planes within the container in order to determine the position and size of the load…"); fit, also via the algorithm, a first model of the cargo described by the current edge configuration of the cargo (e.g., detected edges of load/boxes on the trailer floor of Bennett) into a second model described by the edges of the cargo compartment (e.g., detected edges of container of Bennett) by performing a comparison of the recognizable edges of the cargo (e.g., detected edges within the captured image of Bennett) and of the edge lengths of the recognizable edges of the cargo (e.g., the length of the edges of Bennett) (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…") with: the edges of the cargo compartment in an unladen state and edge lengths of the edges of the cargo compartment in the unladen state (e.g., state at which the container is empty in Bennett) (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…"); and, determine the cargo space occupied by the cargo on a basis of said fitting (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…").
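The "program part or operator for edge detection" recited above maps onto standard image-gradient operators. Neither Bennett nor the claims specify a particular operator; the following is an illustrative Sobel-style sketch under that assumption, using numpy:

```python
import numpy as np

# 3x3 Sobel kernels: horizontal and vertical intensity gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_edges(img, thresh=1.0):
    """Return a boolean edge map: True where the gradient magnitude of
    the grayscale image `img` exceeds `thresh` (interior pixels only)."""
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx = float((KX * win).sum())
            gy = float((KY * win).sum())
            mag[y, x] = (gx * gx + gy * gy) ** 0.5
    return mag > thresh

# A vertical intensity step (e.g., a cargo face against a lighter wall)
# produces a column of edge pixels at the step location.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = sobel_edges(img)
```

An occluded compartment edge would then appear as an interruption in the corresponding run of `True` pixels, which is the "interruptions in the edges" behavior Bennett relies on.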
Regarding dependent claim 2, Bennett teaches:
wherein the cargo space occupied by the cargo determined according to the method is displayed as a graphical representation on a display device or an interface (See at least Bennett, ¶ [0035]; FIGS. 3 – 9, 11; "…Such computing device or network node may include any type of electronic device, including but not limited to, mobile devices such as smartphones or cellular telephones…").
Regarding dependent claim 3, Bennett teaches:
wherein, using and starting from the graphical representation, interventions in or changes to the edge detection, implemented by the program part or the operator, and calibrations can be carried out (See at least Bennett, ¶ [0085]; FIGS. 3 – 9, 11; "…in the embodiment of FIG. 4, dot array 410 will project the dots to the calibrated positions for an empty trailer and therefore a comparison at either the computing device or server will determine that the trailer is empty…").
Regarding dependent claim 4, Bennett teaches:
wherein manual interventions, changes or calibrations can be carried out (See at least Bennett, ¶ [0085, 0092]; FIGS. 3 – 9, 11; "…in the embodiment of FIG. 4, dot array 410 will project the dots to the calibrated positions for an empty trailer and therefore a comparison at either the computing device or server will determine that the trailer is empty…", "…if a higher-resolution dot array such as a 10×10 dot array was provided, this would give better details of the load, shape and position. Thus, in general, with the increase of the number of dots, the detection accuracy will increase…").
Regarding dependent claim 5, Bennett teaches:
wherein the interior of the cargo compartment containing the cargo is captured as the image using at least one camera under illumination with visible or infrared light (See at least Bennett, ¶ [0069]; FIGS. 3 – 9, 11; "…a combination of a light array and a camera may be used for a load detection system. In particular, in one embodiment an infrared (IR) light array is utilized…").
Regarding dependent claim 6, Bennett teaches:
wherein the algorithm underlying the image processing is configured to display additional lines (e.g., dots of Bennett), projected into the cargo compartment, in the graphical representation on the display device or interface (See at least Bennett, ¶ [0085, 0092]; FIGS. 3 – 9, 11; "…in the embodiment of FIG. 4, dot array 410 will project the dots to the calibrated positions for an empty trailer and therefore a comparison at either the computing device or server will determine that the trailer is empty…", "…if a higher-resolution dot array such as a 10×10 dot array was provided, this would give better details of the load, shape and position. Thus, in general, with the increase of the number of dots, the detection accuracy will increase…").
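Bennett's cited dot-array mechanism compares the dots observed by the camera against their calibrated positions for an empty trailer; dots that are missing or displaced indicate cargo intercepting the projection. A minimal sketch of that comparison (dot indices, pixel coordinates, and tolerance are all hypothetical):

```python
def detect_load_from_dots(calibrated, observed, tol=5.0):
    """Compare observed dot positions (pixel coordinates) against the
    calibrated empty-trailer positions. Dots missing from `observed` or
    displaced beyond `tol` pixels suggest a load; returns their indices."""
    disturbed = []
    for idx, (cx, cy) in calibrated.items():
        obs = observed.get(idx)
        if obs is None or ((obs[0] - cx) ** 2 + (obs[1] - cy) ** 2) ** 0.5 > tol:
            disturbed.append(idx)
    return disturbed  # empty list -> trailer appears empty

cal = {0: (100.0, 50.0), 1: (200.0, 50.0), 2: (300.0, 50.0)}
obs = {0: (100.0, 50.0), 1: (212.0, 61.0)}  # dot 1 shifted, dot 2 occluded
```

As Bennett notes, a denser array (e.g., 10×10) refines the resolvable load shape and position, since more dots fall on the load's surfaces.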
Regarding dependent claim 7, Bennett teaches:
wherein the algorithm underlying the image processing is configured to display grid lines (e.g., an array of dots of Bennett) projected into the cargo compartment, in the graphical representation on the display device or interface (See at least Bennett, ¶ [0011, 0044]; FIGS. 3 – 9, 11; "…FIG. 5 is a front perspective of the inside of a loaded container having a light array projected onto a surface thereof…", "…Light array 170 may project an array of dots in any light spectrum, including visible light, ultra-violet (UV) light, or infra-red (IR) light…").
Regarding dependent claim 8, Bennett teaches:
wherein the cargo space occupied by the cargo determined according to the method is provided as processable information for storage in data processing systems (See at least Bennett, ¶ [0040]; FIGS. 3 – 9, 11; "…computing device 110 may access data or programmable logic from an external storage medium (not shown), for example through communications subsystem 130…").
Regarding dependent claim 9, Bennett teaches:
wherein the processable information includes digital information (See at least Bennett, ¶ [0038]; FIGS. 3 – 9, 11; "…subsystem 130 may include a processing module such as a digital signal processor (DSP) or System on Chip (SOC)…").
Regarding dependent claim 10, Bennett teaches:
wherein the cargo space occupied by the cargo determined according to the method is provided as processable information for use in control devices and for use and processing within a data communication system (See at least Bennett, ¶ [0152, 0181]; FIGS. 3 – 9, 11; "…As seen in FIG. 16, a load 1610 blocks part of edge 1620. Based on the length of edge 1620 and the proportion of such edge that has been blocked, the size and location of the load may be determined…", "…The method of any one of clauses AA to JJ, further comprising: detecting edges within the captured image; comparing the detected edges with edges in a reference image; and using a difference in the detected edges and the edges in the reference image to determine cargo loading within the container…").
Regarding dependent claim 11, Bennett teaches:
wherein an implementation of the method for detecting a cargo is triggered in an event-driven manner (See at least Bennett, ¶ [0127]; FIGS. 3 – 9, 11; "…the IR array receives a trigger. Such trigger may be any signal or sensor reading which may cause the load determination algorithm to be started…").
Regarding dependent claim 12, Bennett teaches:
wherein an implementation of the method for detecting a cargo is triggered by sensor signals triggered by loading operations or cargo compartment openings (See at least Bennett, ¶ [0127]; FIGS. 3 – 9, 11; "…the IR array receives a trigger. Such trigger may be any signal or sensor reading which may cause the load determination algorithm to be started…").
Regarding dependent claim 13, Bennett teaches:
wherein, in a case of an event triggering an implementation of the method for detecting a cargo or in a case that determination of the cargo space occupied by the cargo exceeds predetermined threshold values (See at least Bennett, ¶ [0060, 0127]; FIGS. 3 – 9, 11; "…server 240 may receive information from a sensor apparatus associated with various trailers or cargo containers, providing information such as warnings…", "…the IR array receives a trigger. Such trigger may be any signal or sensor reading which may cause the load determination algorithm to be started... For example, the trigger at block 1212 may be a signal from a server asking the computing device in the container for a load status.…"), a warning message or a warning signal is output via a display device or interface (See at least Bennett, ¶ [0060, 0127]; FIGS. 3 – 9, 11; "…server 240 may receive information from a sensor apparatus associated with various trailers or cargo containers, providing information such as warnings…", "…the IR array receives a trigger. Such trigger may be any signal or sensor reading which may cause the load determination algorithm to be started... For example, the trigger at block 1212 may be a signal from a server asking the computing device in the container for a load status.…").
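The threshold-triggered warning recited in claim 13 amounts to comparing the determined occupied space against a configured limit. A sketch of that check (function name, units, and the 90% threshold are assumptions for illustration, not taken from Bennett or the claims):

```python
def check_capacity(occupied_m3, capacity_m3, warn_fraction=0.9):
    """Return a warning string when the occupied volume crosses the
    configured fraction of compartment capacity, else None."""
    if occupied_m3 / capacity_m3 >= warn_fraction:
        return f"WARNING: cargo occupies {occupied_m3 / capacity_m3:.0%} of compartment"
    return None

# 95 m^3 of cargo in a 100 m^3 compartment exceeds the 90% threshold.
msg = check_capacity(95.0, 100.0)
```

In the claimed arrangement the returned message would be routed to the display device or interface, paralleling the warnings Bennett's server receives from container sensor apparatus.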
Regarding dependent claim 14, Bennett teaches:
wherein the image processing and the associated algorithm are implemented at least in part as an app for a data processing device (See at least Bennett, ¶ [0035]; FIGS. 3 – 9, 11; "…Such computing device or network node may include any type of electronic device, including but not limited to, mobile devices such as smartphones or cellular telephones…"), a graphic representation, if any, takes place on the data processing device and interventions in or changes to the method sequence as well as calibrations are configured to be carried out from the data processing device (See at least Bennett, ¶ [0085, 0092]; FIGS. 3 – 9, 11; "…in the embodiment of FIG. 4, dot array 410 will project the dots to the calibrated positions for an empty trailer and therefore a comparison at either the computing device or server will determine that the trailer is empty…").
Regarding dependent claim 15, Bennett teaches:
wherein the data processing devices include at least one of mobile phones and tablets (See at least Bennett, ¶ [0035]; FIGS. 3 – 9, 11; "…Such computing device or network node may include any type of electronic device, including but not limited to, mobile devices such as smartphones or cellular telephones…").
Regarding dependent claim 16, Bennett teaches:
wherein structures determinable by the at least one camera and recognizable as the edges for the edge recognition are configured to be generated by light beams projected into the cargo compartment or by line-shaped markings applied to the walls or floor of the cargo compartment (See at least Bennett, ¶ [0085, 0092]; FIGS. 3 – 9, 11; "…in the embodiment of FIG. 4, dot array 410 will project the dots to the calibrated positions for an empty trailer and therefore a comparison at either the computing device or server will determine that the trailer is empty…", "…if a higher-resolution dot array such as a 10×10 dot array was provided, this would give better details of the load, shape and position. Thus, in general, with the increase of the number of dots, the detection accuracy will increase…").
Regarding dependent claim 17, Bennett teaches:
wherein the display device or interface is a screen (See at least Bennett, ¶ [0035]; FIGS. 3 – 9, 11; "…Such computing device or network node may include any type of electronic device, including but not limited to, mobile devices such as smartphones or cellular telephones…").
Regarding dependent claim 19, Bennett teaches:
A vehicle (e.g., FIG. 3, #310) comprising the device of claim 12 (See at least Bennett, ¶ [0071]; FIGS. 3 – 9, 11; "…In the embodiment of FIG. 3, example truck trailer 310 is shown…").
Regarding dependent claim 20, Bennett teaches:
wherein the vehicle is at least one of a commercial vehicle, a truck, and a trailer (See at least Bennett, ¶ [0071]; FIGS. 3 – 9, 11; "…In the embodiment of FIG. 3, example truck trailer 310 is shown…").
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. See the Notice of References Cited (PTO-892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IDOWU O OSIFADE whose telephone number is (571) 272-0864. The Examiner can normally be reached on Monday-Friday 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the Examiner’s Supervisor, ANDREW MOYER, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/IDOWU O OSIFADE/Primary Examiner, Art Unit 2675