DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to the amendments filed on December 6, 2024. Claims 1-13 are currently pending; all of Claims 1-13 have been amended.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. DE 10 2022 205 674.4, filed on June 2, 2022. Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to the declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application. No action by the applicant is required at this time.
Acknowledgment is made of applicant’s claim for priority to PCT Application No. PCT/EP2023/063544, filed on May 22, 2023. No action by the applicant is required at this time.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on December 2, 2024, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2020/0225673 A1 to Ebrahimi Afrouzi et al. (hereinafter referred to as Afrouzi).
As per Claim 1, Afrouzi discloses the features of a method for controlling an industrial truck (100) in a warehouse (2) (e.g. Paragraphs [0159], [0349], [0418]; where the robot may include a controller/processor, which controls operation of one or more components of the robot based on environmental characteristics inferred from sensory data), comprising:
reading in (S1) a point cloud of the warehouse (2) in the surroundings (4) of the industrial truck (100) (e.g. Paragraphs [0216], [0312]; where the processor of the robot may construct a point cloud map of two-dimensional or three-dimensional points by transforming each of the vectors into a vector space with a shared origin to determine parameters of the environment), wherein
the point cloud comprises point information of at least one object (10) present in the surroundings (4) (e.g. Paragraphs [0155], [0210], [0289]; where the depth camera on the robot provides data to an image processor for depth sensing, obstacle detection, presence detection, etc. for obstacles),
converting (S3) the point information into a frequency distribution of the point information (e.g. Paragraphs [0505]-[0506]; where a vector space model is used for representing histograms of word frequencies associated with the metadata of a digital image, and converting the word histograms to visual words);
determining (S4) a statistical distribution parameter of the point information in the frequency distribution (e.g. Paragraphs [0199], [0295], [0312]; Figures 35B-D, 62A-C; where the processor of the robot may use a statistical test to filter out points from the point cloud data, such as determining an aggregate or mean value or variance);
classifying (S5) the object (10) as a person (12) or a warehouse technology object (14) on the basis of the statistical distribution parameter (e.g. Paragraphs [0157], [0219], [0353], [0363]; where the classification unit is configured to recognize objects under different conditions and may classify an object as being movable or dynamic, and may identify a particular person as occupying an area); and
emitting (S6) a control signal for controlling the industrial truck (100) in the warehouse (2) on the basis of a classification resulting from the classification step (S5) (e.g. Paragraphs [0157]-[0158], [0184], [0278]; where the processor of the robot may determine actions for the robot to execute based on the classification unit classifying the object).
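For illustration of the claimed sequence of steps S1 through S6 as mapped above, a minimal Python sketch follows. The use of point heights as the point information, the 32-bin histogram, the variance threshold, and all function and variable names are assumptions made solely for illustration; they are not drawn from Afrouzi or from the claims as filed.

    import numpy as np

    def control_industrial_truck(points, scatter_threshold=0.5):
        # S1: read in a point cloud of the warehouse surroundings;
        # `points` is assumed to be an (N, 3) array of spatial coordinates.
        heights = points[:, 2]
        # S3: convert the point information into a frequency distribution.
        counts, _ = np.histogram(heights, bins=32)
        freq = counts / counts.sum()
        # S4: determine a statistical distribution parameter; here a
        # scatter parameter (the variance over histogram bins) is used.
        bins = np.arange(len(freq))
        mean = (bins * freq).sum()
        variance = ((bins - mean) ** 2 * freq).sum()
        # S5: classify the object as a person or a warehouse technology
        # object on the basis of the distribution parameter (the
        # threshold is arbitrary and illustrative only).
        is_person = variance > scatter_threshold
        # S6: emit a control signal on the basis of the classification.
        return "stop_working_task" if is_person else "limit_drive_dynamics"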
As per Claim 2, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein the point cloud read in comprises information of at least two objects (10) present in the surroundings (4) (e.g. Paragraphs [0155]-[0156]; where the robot may process image data to identify objects or faces in the image), and wherein the method comprises as a further step
a segmenting (S2) of the point cloud read in, into at least one segment which, of the point information from the point cloud read in, comprises point information associated with one object (10) of the at least two objects (10) (e.g. Paragraphs [0189], [078]; where the processor may use image-based segmentation methods to separate objects from one another), and wherein
in the conversion step (S3) the associated point information is converted into the frequency distribution (e.g. Paragraphs [0216], [0319], [0290]; where the system may construct a map using point cloud data by transforming each of the vectors into a vector space with a shared origin, and may transform an intensity of observed data into a classification problem; and where the processor uses a probability distribution value indicating how likely it is that the robot is in a particular region and updates the distribution based on an observation probability distribution map).
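As an illustration of the segmenting step (S2) recited in Claim 2, a minimal sketch of distance-based point cloud segmentation follows; the greedy Euclidean clustering shown here merely stands in for the image-based segmentation methods cited above, and all names and the distance threshold are hypothetical.

    import numpy as np

    def segment_point_cloud(points, distance_threshold=0.3):
        # S2: split the point cloud into segments, each comprising the
        # point information associated with one object.
        segments = []
        remaining = list(range(len(points)))
        while remaining:
            seed = remaining.pop(0)
            cluster = [seed]
            frontier = [seed]
            while frontier:
                idx = frontier.pop()
                near = [j for j in remaining
                        if np.linalg.norm(points[j] - points[idx]) < distance_threshold]
                for j in near:
                    remaining.remove(j)
                cluster.extend(near)
                frontier.extend(near)
            segments.append(points[cluster])
        # Each returned segment would then be converted into its own
        # frequency distribution in the conversion step (S3).
        return segments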
As per Claim 3, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein determining the statistical distribution parameter is performed on the basis of a scatter parameter in the frequency distribution (e.g. Paragraphs [0363], [0389]; where the processor may use a real-time classifier to identify the chance of traversing an area, by adjusting bias and variance with respect to a uniform distribution).
As per Claim 4, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein
converting the point information includes converting the point information into a histogram (e.g. Paragraphs [0505]-[0506]; where a vector space model is used for representing histograms of word frequencies associated with the metadata of a digital image, and converting the word histograms to visual words), wherein
determining the statistical distribution parameter is performed in the histogram (e.g. Paragraphs [0199], [0295], [0312]; Figures 35B-D, 62A-C; where the processor of the robot may use a statistical test to filter out points from the point cloud data, such as determining an aggregate or mean value or variance), and wherein
the distribution parameter is determined in the histogram on the basis of a scatter parameter (e.g. Paragraphs [0363], [0389]; where the processor may use a real-time classifier to identify the chance of traversing an area, by adjusting bias and variance with respect to a uniform distribution).
As per Claim 5, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein
the point information of the point cloud comprises spatial point coordinates of the at least one object (10) present in the surroundings (4) (e.g. Paragraph [0300]; where the processor may determine a probability density function for localizing the robot using spatial coordinates), and wherein,
the step (S3) of converting includes converting the spatial point coordinates into the frequency distribution (e.g. Paragraph [0207]; where the processor may transform the vectors into a shared coordinate system).
As per Claim 6, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein
the point information of the point cloud read in comprises signal intensities of measurement signals reflected at the at least one object (10) present in the surroundings (4) for detecting the point cloud (e.g. Paragraphs [0272], [0352], [0556]-[0557]; where the depth sensor may use active or passive depth sensing methods, such as IR reflection intensity, and the intensity of the transmitter may be increased with the speed of the robot to observe at higher speeds), and wherein,
the step (S3) of converting includes converting the signal intensities into the frequency distribution (e.g. Paragraphs [0216], [0319], [0290]; where the system may construct a map using point cloud data by transforming each of the vectors into a vector space with a shared origin, and may transform an intensity of observed data into a classification problem; and where the processor uses a probability distribution value indicating how likely it is that the robot is in a particular region and updates the distribution based on an observation probability distribution map).
As per Claim 7, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein
the point information of the point cloud read in contains spatial speed coordinates of the at least one object (10) present in the surroundings (4) (e.g. Paragraphs [0272], [0352], [0556]-[0557]; where the depth sensor may use active or passive depth sensing methods, such as IR reflection intensity, and the intensity of the transmitter may be increased with the speed of the robot to observe at higher speeds), and wherein,
the step (S3) of converting includes transferring the spatial speed coordinates into the frequency distribution (e.g. Paragraphs [0216], [0319], [0290]; where the system may construct a map using point cloud data by transforming each of the vectors into a vector space with a shared origin, and may transform an intensity of observed data into a classification problem; and where the processor uses a probability distribution value indicating how likely it is that the robot is in a particular region and updates the distribution based on an observation probability distribution map).
As per Claim 8, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein
the object is classified as a person (e.g. Paragraphs [0157], [0219], [0353], [0363]; where the classification unit is configured to recognize objects under different conditions and may classify an object as being movable or dynamic, and may identify a particular person as occupying an area) and wherein
emitting the control signal includes sending a control signal for interrupting a working task being carried out by the industrial truck (100) to a working device (30) of the industrial truck (100) (e.g. Paragraphs [0157]-[0158], [0284], [0539]; where the processor of the robot may determine actions for the robot to execute based on the classification unit classifying the object; and where the robot may receive signals to interrupt operations, such as booting up, or may stop the robot when a person is detected walking quickly by the robot).
As per Claim 9, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein
the object is classified as a warehouse technology object (14) (e.g. Paragraphs [0157], [0195], [0219], [0249]; where the classification unit is configured to recognize objects under different conditions and may classify an object as being movable or dynamic or static, and may identify furniture, obstacles, static objects, walls, etc.) and
emitting the control signal includes emitting a control signal for limiting the drive dynamics of the industrial truck (100) (e.g. Paragraphs [0284], [0335], [0407], [0539]; where the processor of the robot may determine actions for the robot to execute based on the classification unit classifying the object; and where the robot may receive signals to interrupt operations, such as booting up, or may stop the robot when a person is detected walking quickly by the robot; or the processor may instruct the robot to approach the object at a particular angle and/or driving speed).
As per Claim 10, Afrouzi discloses the features of Claim 1, and Afrouzi further discloses the features of wherein
classifying the object (10) is carried out on the basis of a machine learning model (e.g. Paragraphs [0156], [0363]; where the identification of the object that is included in the image is trained through a deep learning method), wherein
at least one of the frequency distribution and the statistical distribution parameter is read into the machine learning model (e.g. Paragraphs [0369], [0397]; where the processor determines areas in which to operate based on data from prior work sessions and updates the movement plan based on new data; and the machine learning algorithm may be used to learn the features of different types of objects extracted from sensor data such that the machine learning algorithm may identify the most likely type of object observed at a location).
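To illustrate the mapping of Claim 10, a minimal sketch follows in which the frequency distribution and the statistical distribution parameter are read into a machine learning model. The choice of a random forest, the randomly generated placeholder training data, and all names are assumptions for illustration only; the passages of Afrouzi cited above describe deep learning methods.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder training data: each row concatenates a 32-bin frequency
    # distribution with its scatter parameter (variance); labels are
    # 0 = warehouse technology object, 1 = person.
    rng = np.random.default_rng(0)
    X_train = rng.random((200, 33))
    y_train = rng.integers(0, 2, 200)

    # The frequency distribution and the statistical distribution
    # parameter together form the feature vector read into the model.
    model = RandomForestClassifier(n_estimators=50)
    model.fit(X_train, y_train)

    def classify_object(freq, variance):
        features = np.concatenate([freq, [variance]]).reshape(1, -1)
        return "person" if model.predict(features)[0] == 1 else "warehouse technology object"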
As per Claim 11, Afrouzi discloses the features of Claim 4, and Afrouzi further discloses the features of wherein
classifying the object (10) is carried out on the basis of the machine learning model (e.g. Paragraphs [0156], [0363]; where the identification of the object that is included in the image is trained through a deep learning method), wherein
the histogram is read into the machine learning model (e.g. Paragraphs [0505]-[0506]; where a vector space model is used for representing histograms of word frequencies associated with the metadata of a digital image, and converting the word histograms to visual words).
As per Claim 12, Afrouzi discloses the features of a control unit (110) for controlling an industrial truck (100) in a warehouse (2) (e.g. Paragraphs [0159], [0349], [0418]; where the robot may include a controller/processor, which controls operation of one or more components of the robot based on environmental characteristics inferred from sensory data), comprising:
an interface (112) configured for reading-in a point cloud of the warehouse (2) in the surroundings (4) of the industrial truck (100) (e.g. Paragraphs [0216], [0312]; where the processor of the robot may construct a point cloud map of two-dimensional or three-dimensional points by transforming each of the vectors into a vector space with a shared origin to determine parameters of the environment), wherein
the point cloud read in contains point information of at least one object (10) present in the surroundings (4) (e.g. Paragraphs [0155], [0210], [0289]; where the depth camera on the robot provides data to an image processor for depth sensing, obstacle detection, presence detection, etc. for obstacles); and
an interface (114) for emitting a control signal for controlling the industrial truck (100) in the warehouse (2) on the basis of the resulting classification (e.g. Paragraphs [0157]-[0158], [0184], [0278]; where the processor of the robot may determine actions for the robot to execute based on the classification unit classifying the object); wherein
the control unit (110) (e.g. Paragraphs [0418], [0447], [0600]; where the processor controls operation of one or more components of the robot based on environmental characteristics inferred from sensory data) is configured to:
convert the point information into a frequency distribution of the point information (e.g. Paragraphs [0505]-[0506]; where a vector space model is used for representing histograms of word frequencies associated with the metadata of a digital image, and converting the word histograms to visual words),
determine a statistical distribution parameter of the point information in the frequency distribution (e.g. Paragraphs [0199], [0295], [0312]; Figures 35B-D, 62A-C; where the processor of the robot may use a statistical test to filter out points from the point cloud data, such as determining an aggregate or mean value or variance),
classify the object (10) in a category as a person (12) or as a warehouse technology object (14) on the basis of the statistical distribution parameter (e.g. Paragraphs [0157], [0219], [0353], [0363]; where the classification unit is configured to recognize objects under different conditions and may classify an object as being movable or dynamic, and may identify a particular person as occupying an area).
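As an architectural illustration of the control unit (110) of Claim 12, a minimal sketch follows; the class structure and all names are hypothetical and correspond only loosely to the claimed interfaces (112, 114), reusing the step logic sketched under Claim 1.

    import numpy as np

    class ControlUnit:
        def read_in(self, point_cloud):
            # Interface (112): read in the point cloud of the warehouse
            # in the surroundings of the industrial truck.
            self.points = np.asarray(point_cloud)

        def emit_control_signal(self):
            # Convert the point information, determine the distribution
            # parameter, classify, and emit via interface (114).
            counts, _ = np.histogram(self.points[:, 2], bins=32)
            freq = counts / counts.sum()
            bins = np.arange(len(freq))
            mean = (bins * freq).sum()
            variance = ((bins - mean) ** 2 * freq).sum()
            is_person = variance > 0.5  # illustrative threshold only
            return "stop_working_task" if is_person else "limit_drive_dynamics"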
As per Claim 13, Afrouzi discloses the features of Claim 12, and Afrouzi further discloses the features of an industrial truck (100) with the control unit (110) for controlling the industrial truck (100) (e.g. Paragraphs [0418], [0447], [0600]; where the processor controls operation of one or more components of the robot based on environmental characteristics inferred from sensory data).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Armstrong-Crews, et al. (U.S. 2022/0135074 A1), which teaches a method for acquiring and segmenting point cloud data to navigate an autonomous vehicle.
Englard, et al. (U.S. 2019/0180502 A1), which teaches a method for processing point cloud data to sense an environment in which the vehicle is moving.
Smolyanskiy, et al. (U.S. 2021/0342608 A1), which teaches a method for segmentation of LIDAR images.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERRITT E LEVY whose telephone number is (571)270-5595. The examiner can normally be reached Mon-Fri 0630-1600.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim, can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MERRITT E LEVY/Examiner, Art Unit 3666
/TIFFANY P YOUNG/Primary Examiner, Art Unit 3666