Prosecution Insights
Last updated: April 19, 2026
Application No. 18/794,667

METHODS, SYSTEMS, APPARATUSES, AND COMPUTER PROGRAM PRODUCTS FOR FOLLOWING AN EDGE OF AN OBJECT

Status: Non-Final Office Action (§103), Round 1
Filed: Aug 05, 2024
Examiner: KASPER, BYRON XAVIER
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Assurant, Inc.
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 70% (72 granted / 103 resolved; +17.9% vs TC avg, above average)
Interview Lift: +18.4% on resolved cases with an interview (a strong lift)
Typical Timeline: 3y 0m average prosecution; 36 applications currently pending
Career History: 139 total applications across all art units
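These headline numbers reconcile with one another, which is worth verifying before relying on them. Below is a minimal sketch of the arithmetic, assuming the dashboard computes simple ratios and treats the interview lift as a percentage-point add-on (the per-case interview split itself is not shown on this page):

```python
# Sanity-check the examiner dashboard figures from the published counts.
# Assumption: "interview lift" is a percentage-point difference added to
# the career allow rate; the underlying per-case interview data is not
# shown on this page.

granted, resolved, pending = 72, 103, 36

career_allow_rate = granted / resolved               # 0.699 -> shown as 70%
interview_lift = 0.184                               # +18.4 points, as shown
with_interview = career_allow_rate + interview_lift  # 0.883 -> shown as 88%
total_applications = resolved + pending              # 139, matching the card

print(f"allow rate {career_allow_rate:.1%}, "
      f"with interview {with_interview:.1%}, total {total_applications}")
```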

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Deltas are relative to Tech Center average estimates • Based on career data from 103 resolved cases
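The per-statute deltas imply a baseline that can be recovered by simple subtraction. A quick sketch, assuming each delta is a plain percentage-point difference (examiner rate minus Tech Center average):

```python
# Recover the Tech Center baseline implied by the deltas above.
# Assumption: delta = examiner rate - TC average, in percentage points.

examiner_rate = {"§101": 10.9, "§103": 56.3, "§102": 11.9, "§112": 16.4}
delta_vs_tc = {"§101": -29.1, "§103": 16.3, "§102": -28.1, "§112": -23.6}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")

# Every statute maps back to ~40.0% under this reading, which suggests
# the page compares against a single blended TC estimate rather than a
# per-statute baseline.
```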

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This communication is responsive to Application No. 18/794,667 and the preliminary amendments filed on 8/9/2024.

3. Claims 1-21 are presented for examination.

Information Disclosure Statement

4. The information disclosure statements (IDS) submitted on 10/29/2024, 11/22/2024, 2/24/2025, 4/25/2025, 8/25/2025, and 10/6/2025 have been considered by the examiner.

5. The Applicant has submitted six information disclosure statements (IDSs), comprising 89 pre-grant publications, 67 patents, 30 foreign patent documents, and 49 non-patent literature (NPL) references, totaling 235 documents, which is excessive for any Examiner to have to consider at any level beyond a cursory review. In accord with dicta from Molins PLC v. Textron, Inc., 48 F.3d 1172 (Fed. Cir. 1995), stating that forcing the Examiner to find "a needle in a haystack" is "probative of bad faith." Id. [The Molins] case presented a situation where the disclosure was in excess of 700 pages and contained more than fifty references. Likewise, the instant application’s IDSs include more references than even the Molins case, and these particularly long IDSs do not include any concise explanation of the relevance of any of the listed references nor cite any pages, columns, and lines (or paragraph numbers) where relevant passages or relevant figures appear. According to MPEP Section 2004 “Aids to Compliance With Duty of Disclosure [R-08.2012]”, “It is desirable to avoid the submission of long lists of documents if it can be avoided. Eliminate clearly irrelevant and marginally pertinent cumulative information. If a long list is submitted, highlight those documents which have been specifically brought to Applicant’s attention and/or are known to be of most significance.” Additionally, per MPEP Section 609.04(a)(III): “applicants are encouraged to provide a concise explanation of why the English-language information is being submitted and how it is understood to be relevant. Concise explanations (especially those which point out the relevant pages and lines) are helpful to the Office, particularly where documents are lengthy and complex and applicant is aware of a section that is highly relevant to patentability or where a large number of documents are submitted and applicant is aware that one or more are highly relevant to patentability.” See Penn Yan Boats, Inc. v. Sea Lark Boats, Inc., 359 F. Supp. 948, 175 USPQ 260 (S.D. Fla. 1972), aff’d, 479 F.2d 1338, 178 USPQ 577 (5th Cir. 1973), cert. denied, 414 U.S. 874 (1974). But cf. Molins PLC v. Textron Inc., 48 F.3d 1172, 33 USPQ2d 1823 (Fed. Cir. 1995). As such, even though these IDSs have been placed in the application file with the lists of references marked as considered, and the compilation of those listed PG Publications and Patent references have at least been key-word searched and/or classification searched for relevant prior art, the information referred to therein for each individual reference has admittedly not been fully considered beyond a cursory review. If Applicant wishes to have one or more references fully considered, the Examiner requests resubmitting the IDSs with a reasonable number of references that are known to be pertinent for the determination of patentability as defined by 37 C.F.R. § 1.56, along with the concise explanations as to relevance and citations explaining the locations of relevant passages or figures, as per 37 CFR 1.98(a)(3) and 37 CFR § 1.105.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claim(s) 1, 8, 11, 18, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oota et al. (US 20180370027 A1 hereinafter Oota) in view of Diankov et al. (US 20200130963 A1 hereinafter Diankov).

Regarding Claim 1, Oota teaches a computer-implemented method for following at least a portion of an edge of an object via an edge-following system, the computer-implemented method comprising an edge-following operation, wherein the edge-following operation comprises at least: causing a handling tool associated with a multi-axis robot to engage the object ([0035] via “As shown in FIG. 2, the robot 210 includes a robot hand 201, the posture of which is controlled to various positions and angles. The robot 200, for example, grips in series the workpieces 50 being a plurality of objects to be inspected that are prepared in a workpiece storage space. The robot hand 201 can change the position and posture of the gripped workpiece 50.”); defining a working point on the edge of the object ([0037] via “The control device 300 causes the camera 210 to image in each imaging point set on the surface to be inspected of the workpiece 50, while moving the robot hand 201 gripping the workpiece 50, along a movement route including a plurality of imaging points set on the surface to be inspected so that the surface to be inspected of the workpiece 50 is entirely covered by a plurality of images imaged by the camera 210.”), (Note: See Figures 4A-4E and 7 of Oota wherein the imaging points (interpreted to be a working point) are located on the edges of the workpiece.), wherein the working point is kept at a predetermined working offset from an ancillary tool associated with the edge-following system ([0038] via “In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point.
… In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”); and causing movement of the multi-axis robot to manipulate the object via the handling tool while maintaining the predetermined working offset continuously between the ancillary tool and the working point ([0038] via “In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point. … Thus, the accuracy of the flaw inspection can be improved by, in addition to the imaging in which the imaging region including the imaging point is perpendicular to the optical axis of the camera 210 (and the illumination light of the illumination 220), for example, as shown in FIG. 4C, adjusting the orientation of the workpiece 50 gripped by the robot hand 201, by the operation of the robot hand 201, for example, as shown in FIG. 4D or FIG. 4E, so that, in the same imaging point, the imaging region including the imaging point has an angle that is not perpendicular to the optical axis of the camera 210 and the illumination light of the illumination 220. In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”), ([0041] via “The movement operation control unit 330 moves the robot hand 201 on the basis of the movement route of the robot hand 201 calculated by the imaging position information setting unit 310. Thereby, the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is controlled so that all imaging points included in the imaging position information set by the imaging position information setting unit 310 are covered by the imaging points where imaging is performed by the camera 210.”). 
Oota is silent on determining one or more dimensional attributes associated with the object; causing movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object; wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side. However, Diankov teaches determining one or more dimensional attributes associated with the object ([0041] via “In some embodiments, the destination crossing sensor 316 can be used to measure a height of the target object 112 during transfer. … The robotic system 100 can compare the gripper height 322 to a crossing reference height 324 (e.g., a known vertical position of the destination crossing sensor 316 and/or a reference line/plane thereof) to calculate an object height 320 of the target object 112 that is being transferred.”); causing movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330. Also, the robotic system 100 can derive the scanning maneuver 404 for rotating the target object 112 to present multiple surfaces thereof to the scanning sensor 330.”), ([0056] via “Also, as an illustrative example, the robotic system 100 can derive scanning maneuver 414 for moving the recognized object along the x-axis and/or the y-axis and/or for rotating the object about the z-axis.”), (Note: See Figures 4A-4D of Diankov as well.); wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side ([0024] via “In some embodiments, the task can include manipulation (e.g., moving and/or reorienting) of a target object 112 (e.g., one of the packages, boxes, cases, cages, pallets, etc. corresponding to the executing task) from a start location 114 to a task location 116.”), ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330.”), (Note: See Figures 4A-4D and 6A-6F of Diankov as well.). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Diankov wherein the edge-following operation comprises at least: determining one or more dimensional attributes associated with the object; causing movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object; wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side. Doing so allows the system to accurately recognize all dimensions and surfaces of the object for further object processing, as stated by Diankov ([0054] via “The robotic system 100 can operate the scanning sensor 330 based on placing the end effector 304 at the scanning position 412 and/or based on implementing the scanning maneuver 414. … Thus, based on the scanning position 412 and/or the scanning maneuver 414, the robotic system 100 can present multiple surfaces/portions of the unrecognized objects and increase the likelihood of accurately locating and scanning identifiers on the unrecognized objects.”). Regarding Claim 8, modified reference Oota teaches the computer-implemented method of claim 1, but is silent on wherein causing the movement of the multi-axis robot to manipulate the object comprises executing a sequence of alternately translating the object linearly along one or more of an x-axis or a y-axis of a particular coordinate plane and rotating the object based in part on one or more points of rotation to capture image data associated with the plurality of sides and the corners comprised in the edge of the object. However, Diankov teaches wherein causing the movement of the multi-axis robot to manipulate the object comprises executing a sequence of alternately translating the object linearly along one or more of an x-axis or a y-axis of a particular coordinate plane and rotating the object based in part on one or more points of rotation to capture image data associated with the plurality of sides and the corners comprised in the edge of the object ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330. Also, the robotic system 100 can derive the scanning maneuver 404 for rotating the target object 112 to present multiple surfaces thereof to the scanning sensor 330.”), ([0056] via “Also, as an illustrative example, the robotic system 100 can derive scanning maneuver 414 for moving the recognized object along the x-axis and/or the y-axis and/or for rotating the object about the z-axis.”), (Note: See Figures 4A-4D of Diankov as well.). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Diankov wherein causing the movement of the multi-axis robot to manipulate the object comprises executing a sequence of alternately translating the object linearly along one or more of an x-axis or a y-axis of a particular coordinate plane and rotating the object based in part on one or more points of rotation to capture image data associated with the plurality of sides and the corners comprised in the edge of the object. Doing so allows the system to accurately recognize all dimensions and surfaces of the object for further object processing, as stated by Diankov ([0054] via “The robotic system 100 can operate the scanning sensor 330 based on placing the end effector 304 at the scanning position 412 and/or based on implementing the scanning maneuver 414. … Thus, based on the scanning position 412 and/or the scanning maneuver 414, the robotic system 100 can present multiple surfaces/portions of the unrecognized objects and increase the likelihood of accurately locating and scanning identifiers on the unrecognized objects.”). Regarding Claim 11, Oota teaches an edge-following system for following an edge of an object, the edge-following system comprising at least one processor and at least one non-transitory memory including computer-coded instructions thereon, the computer-coded instructions, when executed by the at least one processor ([0077] via “The function blocks included in the machine learning device 10, the control device 300, and the flaw inspection device 400 are described above. In order to realize these function blocks, the machine learning device 10, the control device 300, and the flaw inspection device 400 include an operation processing device such as a central processing unit (CPU). The machine learning device 10, the control device 300, and the flaw inspection device 400 also include a sub storage device such as a hard disk drive (HDD) stored with various control programs such as application software and an operating system (OS), and a main storage device such as a random access memory (RAM) for storing data temporarily required for execution of the program by the operation processing device.”), ([0096] via “The program may be stored by using various types of non-transitory computer readable media, and supplied to the computer.”), cause the edge-following system to: cause a handling tool associated with a multi-axis robot to engage the object ([0035] via “As shown in FIG. 2, the robot 210 includes a robot hand 201, the posture of which is controlled to various positions and angles. The robot 200, for example, grips in series the workpieces 50 being a plurality of objects to be inspected that are prepared in a workpiece storage space. 
The robot hand 201 can change the position and posture of the gripped workpiece 50.”); define a working point on the edge of the object ([0037] via “The control device 300 causes the camera 210 to image in each imaging point set on the surface to be inspected of the workpiece 50, while moving the robot hand 201 gripping the workpiece 50, along a movement route including a plurality of imaging points set on the surface to be inspected so that the surface to be inspected of the workpiece 50 is entirely covered by a plurality of images imaged by the camera 210.”), (Note: See Figures 4A-4E and 7 of Oota wherein the imaging points (interpreted to be a working point) are located on the edges of the workpiece.), wherein the working point is kept at a predetermined working offset from an ancillary tool associated with the edge-following system ([0038] via “In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point. … In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”); and cause movement of the multi-axis robot to manipulate the object via the handling tool while maintaining the predetermined working offset continuously between the ancillary tool and the working point ([0038] via “In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point. … Thus, the accuracy of the flaw inspection can be improved by, in addition to the imaging in which the imaging region including the imaging point is perpendicular to the optical axis of the camera 210 (and the illumination light of the illumination 220), for example, as shown in FIG. 4C, adjusting the orientation of the workpiece 50 gripped by the robot hand 201, by the operation of the robot hand 201, for example, as shown in FIG. 4D or FIG. 4E, so that, in the same imaging point, the imaging region including the imaging point has an angle that is not perpendicular to the optical axis of the camera 210 and the illumination light of the illumination 220. 
In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”), ([0041] via “The movement operation control unit 330 moves the robot hand 201 on the basis of the movement route of the robot hand 201 calculated by the imaging position information setting unit 310. Thereby, the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is controlled so that all imaging points included in the imaging position information set by the imaging position information setting unit 310 are covered by the imaging points where imaging is performed by the camera 210.”). Oota is silent on to determine one or more dimensional attributes associated with the object; cause movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object; wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side. However, Diankov teaches to determine one or more dimensional attributes associated with the object ([0041] via “In some embodiments, the destination crossing sensor 316 can be used to measure a height of the target object 112 during transfer. … The robotic system 100 can compare the gripper height 322 to a crossing reference height 324 (e.g., a known vertical position of the destination crossing sensor 316 and/or a reference line/plane thereof) to calculate an object height 320 of the target object 112 that is being transferred.”); cause movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330. 
Also, the robotic system 100 can derive the scanning maneuver 404 for rotating the target object 112 to present multiple surfaces thereof to the scanning sensor 330.”), ([0056] via “Also, as an illustrative example, the robotic system 100 can derive scanning maneuver 414 for moving the recognized object along the x-axis and/or the y-axis and/or for rotating the object about the z-axis.”), (Note: See Figures 4A-4D of Diankov as well.); wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side ([0024] via “In some embodiments, the task can include manipulation (e.g., moving and/or reorienting) of a target object 112 (e.g., one of the packages, boxes, cases, cages, pallets, etc. corresponding to the executing task) from a start location 114 to a task location 116.”), ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330.”), (Note: See Figures 4A-4D and 6A-6F of Diankov as well.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Diankov wherein the edge-following system is caused to: determine one or more dimensional attributes associated with the object; cause movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object; wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side. Doing so allows the system to accurately recognize all dimensions and surfaces of the object for further object processing, as stated by Diankov ([0054] via “The robotic system 100 can operate the scanning sensor 330 based on placing the end effector 304 at the scanning position 412 and/or based on implementing the scanning maneuver 414. … Thus, based on the scanning position 412 and/or the scanning maneuver 414, the robotic system 100 can present multiple surfaces/portions of the unrecognized objects and increase the likelihood of accurately locating and scanning identifiers on the unrecognized objects.”). Regarding Claim 18, modified reference Oota teaches the edge-following system of claim 11, but is silent on wherein causing the movement of the multi-axis robot to manipulate the object comprises executing a sequence of alternately translating the object linearly along one or more of an x-axis or a y-axis of a particular coordinate plane and rotating the object based in part on one or more points of rotation to capture image data associated with the plurality of sides and the corners comprised in the edge of the object. 
However, Diankov teaches wherein causing the movement of the multi-axis robot to manipulate the object comprises executing a sequence of alternately translating the object linearly along one or more of an x-axis or a y-axis of a particular coordinate plane and rotating the object based in part on one or more points of rotation to capture image data associated with the plurality of sides and the corners comprised in the edge of the object ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330. Also, the robotic system 100 can derive the scanning maneuver 404 for rotating the target object 112 to present multiple surfaces thereof to the scanning sensor 330.”), ([0056] via “Also, as an illustrative example, the robotic system 100 can derive scanning maneuver 414 for moving the recognized object along the x-axis and/or the y-axis and/or for rotating the object about the z-axis.”), (Note: See Figures 4A-4D of Diankov as well.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Diankov wherein causing the movement of the multi-axis robot to manipulate the object comprises executing a sequence of alternately translating the object linearly along one or more of an x-axis or a y-axis of a particular coordinate plane and rotating the object based in part on one or more points of rotation to capture image data associated with the plurality of sides and the corners comprised in the edge of the object. Doing so allows the system to accurately recognize all dimensions and surfaces of the object for further object processing, as stated by Diankov ([0054] via “The robotic system 100 can operate the scanning sensor 330 based on placing the end effector 304 at the scanning position 412 and/or based on implementing the scanning maneuver 414. … Thus, based on the scanning position 412 and/or the scanning maneuver 414, the robotic system 100 can present multiple surfaces/portions of the unrecognized objects and increase the likelihood of accurately locating and scanning identifiers on the unrecognized objects.”). Regarding Claim 21, Oota teaches at least one non-transitory computer-readable storage medium for following an edge of an object, the at least one non-transitory computer-readable storage medium having computer program code stored thereon that, in execution with at least one processor, configures the at least one processor ([0077] via “The function blocks included in the machine learning device 10, the control device 300, and the flaw inspection device 400 are described above. In order to realize these function blocks, the machine learning device 10, the control device 300, and the flaw inspection device 400 include an operation processing device such as a central processing unit (CPU). 
The machine learning device 10, the control device 300, and the flaw inspection device 400 also include a sub storage device such as a hard disk drive (HDD) stored with various control programs such as application software and an operating system (OS), and a main storage device such as a random access memory (RAM) for storing data temporarily required for execution of the program by the operation processing device.”), ([0096] via “The program may be stored by using various types of non-transitory computer readable media, and supplied to the computer.”) to: cause a handling tool associated with a multi-axis robot to engage the object ([0035] via “As shown in FIG. 2, the robot 210 includes a robot hand 201, the posture of which is controlled to various positions and angles. The robot 200, for example, grips in series the workpieces 50 being a plurality of objects to be inspected that are prepared in a workpiece storage space. The robot hand 201 can change the position and posture of the gripped workpiece 50.”); define a working point on the edge of the object ([0037] via “The control device 300 causes the camera 210 to image in each imaging point set on the surface to be inspected of the workpiece 50, while moving the robot hand 201 gripping the workpiece 50, along a movement route including a plurality of imaging points set on the surface to be inspected so that the surface to be inspected of the workpiece 50 is entirely covered by a plurality of images imaged by the camera 210.”), (Note: See Figures 4A-4E and 7 of Oota wherein the imaging points (interpreted to be a working point) are located on the edges of the workpiece.), wherein the working point is kept at a predetermined working offset from an ancillary tool associated with an edge-following system ([0038] via “In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point. … In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”); and cause movement of the multi-axis robot to manipulate the object via the handling tool while maintaining the predetermined working offset continuously between the ancillary tool and the working point ([0038] via “In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point. … Thus, the accuracy of the flaw inspection can be improved by, in addition to the imaging in which the imaging region including the imaging point is perpendicular to the optical axis of the camera 210 (and the illumination light of the illumination 220), for example, as shown in FIG. 
4C, adjusting the orientation of the workpiece 50 gripped by the robot hand 201, by the operation of the robot hand 201, for example, as shown in FIG. 4D or FIG. 4E, so that, in the same imaging point, the imaging region including the imaging point has an angle that is not perpendicular to the optical axis of the camera 210 and the illumination light of the illumination 220. In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”), ([0041] via “The movement operation control unit 330 moves the robot hand 201 on the basis of the movement route of the robot hand 201 calculated by the imaging position information setting unit 310. Thereby, the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is controlled so that all imaging points included in the imaging position information set by the imaging position information setting unit 310 are covered by the imaging points where imaging is performed by the camera 210.”). Oota is silent on to determine one or more dimensional attributes associated with the object; cause movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object; wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side. However, Diankov teaches to determine one or more dimensional attributes associated with the object ([0041] via “In some embodiments, the destination crossing sensor 316 can be used to measure a height of the target object 112 during transfer. … The robotic system 100 can compare the gripper height 322 to a crossing reference height 324 (e.g., a known vertical position of the destination crossing sensor 316 and/or a reference line/plane thereof) to calculate an object height 320 of the target object 112 that is being transferred.”); cause movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330. 
Also, the robotic system 100 can derive the scanning maneuver 404 for rotating the target object 112 to present multiple surfaces thereof to the scanning sensor 330.”), ([0056] via “Also, as an illustrative example, the robotic system 100 can derive scanning maneuver 414 for moving the recognized object along the x-axis and/or the y-axis and/or for rotating the object about the z-axis.”), (Note: See Figures 4A-4D of Diankov as well.); wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side ([0024] via “In some embodiments, the task can include manipulation (e.g., moving and/or reorienting) of a target object 112 (e.g., one of the packages, boxes, cases, cages, pallets, etc. corresponding to the executing task) from a start location 114 to a task location 116.”), ([0053] via “The robotic system 100 can further derive the scanning maneuver 414 based on the estimates of the corner and/or the end portion locations. For example, the robotic system 100 can derive the scanning maneuver 414 for horizontally/vertically displacing the target object 112 to present multiple corners and/or end portions thereof to the scanning sensor 330.”), (Note: See Figures 4A-4D and 6A-6F of Diankov as well.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Diankov wherein the at least one processor is configured to: determine one or more dimensional attributes associated with the object; cause movement of the multi-axis robot to manipulate the object via the handling tool such that the working point is configured to move along the edge of the object from a first location on a first side of the edge of the object to a second location along a second side of the edge of the object; wherein the edge of the object comprises surfaces along a plurality of sides connected with corners, and wherein the plurality of sides include the first side and the second side. Doing so allows the system to accurately recognize all dimensions and surfaces of the object for further object processing, as stated by Diankov ([0054] via “The robotic system 100 can operate the scanning sensor 330 based on placing the end effector 304 at the scanning position 412 and/or based on implementing the scanning maneuver 414. … Thus, based on the scanning position 412 and/or the scanning maneuver 414, the robotic system 100 can present multiple surfaces/portions of the unrecognized objects and increase the likelihood of accurately locating and scanning identifiers on the unrecognized objects.”).

9. Claim(s) 2, 4, 5, 12, 14, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oota et al. (US 20180370027 A1 hereinafter Oota) in view of Diankov et al. (US 20200130963 A1 hereinafter Diankov), and further in view of Shivaram et al. (US 20170024613 A1 hereinafter Shivaram).

Regarding Claim 2, modified reference Oota teaches the computer-implemented method of claim 1, wherein the ancillary tool is an image capturing device ([0036] via “The camera 210 is an imaging means of imaging the surface to be inspected of the workpiece 50, and, for example, is composed of an imaging element such as a CCD image sensor and a CMOS image sensor.
The camera 220 is supported in a predetermined posture so that the surface to be inspected of the workpiece 50 gripped by the robot hand 201 can be imaged, by a support body 213.”), the computer-implemented method further comprising: capturing, via the image capturing device, image data associated with the edge of the object during execution of the edge-following operation ([0037] via “The control device 300 causes the camera 210 to image in each imaging point set on the surface to be inspected of the workpiece 50, while moving the robot hand 201 gripping the workpiece 50, along a movement route including a plurality of imaging points set on the surface to be inspected so that the surface to be inspected of the workpiece 50 is entirely covered by a plurality of images imaged by the camera 210.”), (Note: See Figures 4A-4E and 7 of Oota wherein the image data comprises the edges of the workpiece.). Oota is silent on generating, based on the image data, a continuous image of the edge of the object from the first location to the second location, including a first corner disposed between the first side and the second side. However, Shivaram teaches generating, based on the image data, a continuous image of the edge of the object from the first location to the second location, including a first corner disposed between the first side and the second side ([0053] via “Reference is now made to FIG. 7, which shows a procedure 700 for generating stitched and composite images of workpieces in each station. In step 710, the procedure 700 computes a region imaged by each camera from each station in the common coordinate system that was determined above. In step 720, a bounding box is computed that contains all the regions. In step 730, the procedure 700 creates two stitched images, one for each station.”), ([0058] via “A graphical user interface (GUI) display with similar depictions is shown in FIGS. 12-15. In FIG. 12, a four-camera arrangement generates four respective images 1210, 1220, 1230 and 1240 of the first workpiece placed on (e.g.) a pick platform. Note that the respective edge 1212, 1222, 1232 and 1242 of the workpiece is scaled differently in each image due to the physical placement of the respective camera relative to the platform and workpiece.”), (Note: See Figures 7 and 12-15 of Shivaram as well.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shivaram wherein the computer-implemented method further comprises: generating, based on the image data, a continuous image of the edge of the object from the first location to the second location, including a first corner disposed between the first side and the second side. When the field of view of the camera is too small to fully capture the object, creating a continuous image allows a larger desired field of view of the object to be captured, as stated by Shivaram ([0054] via “As used herein, the term “stitched image” or “stitching” relates to a process that combines two or more source images into one composite result image. The process is useful when a camera field of view is too small to capture the entire desired scene and multiple images are required.”). Regarding Claim 4, modified reference Oota teaches the computer-implemented method of claim 2, wherein the predetermined working offset is defined at least in part by a predetermined focal length of a lens of the image capturing device ([0038] via “As shown in FIG. 
4A, the imaging point means a point located on the optical axis of when the imaging is performed by the camera 210, and the imaging region means an imaging range imaged by the camera 210. When a distance from the camera 210 to the imaging point is short, the imaging region is small (the field of view is small) as in an imaging region 1 shown in FIG. 4B. When the distance from the camera 210 to the imaging point is long, the imaging region is large (the field of view is large) as in an imaging region 2. In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point. … In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”).

Regarding Claim 5, modified reference Oota teaches the computer-implemented method of claim 2, the edge-following operation further comprising: determining a working orientation of the working point of the object relative to the image capturing device after causing the handling tool to engage the object, wherein the working orientation and working offset are configured to be controlled by the handling tool of the multi-axis robot such that the working point on the edge of the object remains in focus of the image capturing device during the edge-following operation ([0038] via “As shown in FIG. 4A, the imaging point means a point located on the optical axis of when the imaging is performed by the camera 210, and the imaging region means an imaging range imaged by the camera 210. When a distance from the camera 210 to the imaging point is short, the imaging region is small (the field of view is small) as in an imaging region 1 shown in FIG. 4B. When the distance from the camera 210 to the imaging point is long, the imaging region is large (the field of view is large) as in an imaging region 2. In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, the number of imaging can be increased or decreased by adjusting the distance from the camera 210 to the imaging point, within a range of focus in the imaging point. When the surface to be inspected of the workpiece 50 is imaged by the camera 210, depending on a shape of a flaw formed on the surface to be inspected of the workpiece 50, a plurality of positional relationships of the camera 210 and the illumination 220, and the imaging point of the workpiece 50 need to be set. Thus, the accuracy of the flaw inspection can be improved by, in addition to the imaging in which the imaging region including the imaging point is perpendicular to the optical axis of the camera 210 (and the illumination light of the illumination 220), for example, as shown in FIG. 4C, adjusting the orientation of the workpiece 50 gripped by the robot hand 201, by the operation of the robot hand 201, for example, as shown in FIG. 4D or FIG.
4E, so that, in the same imaging point, the imaging region including the imaging point has an angle that is not perpendicular to the optical axis of the camera 210 and the illumination light of the illumination 220. In this way, when the surface to be inspected of the workpiece 50 is imaged by the camera 210, by specifying the imaging point, the distance from the camera 210 to the imaging point, and the orientation of the workpiece 50 in the imaging point (hereinafter, these are referred to as “imaging position information”), the positional relationship of the surface to be inspected of the workpiece 50 gripped by the robot hand 201, and the optical axis of the camera 210 and the illumination light of the illumination 220 is uniquely determined, and the imaging region of the surface to be inspected imaged by the camera 210 is uniquely determined.”).

Regarding Claim 12, modified reference Oota teaches the edge-following system of claim 11, wherein the computer-coded instructions, when executed by the at least one processor, further cause the edge-following system to: capture, via an image capturing device, image data associated with the edge of the object ([0037] via “The control device 300 causes the camera 210 to image in each imaging point set on the surface to be inspected of the workpiece 50, while moving the robot hand 201 gripping the workpiece 50 …
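For readers mapping this rejection onto the claims, the independent claims recite a concrete motion-control loop: engage the object, define a working point on its edge, hold a predetermined working offset between that point and a stationary ancillary tool (an image capturing device, per dependent claim 2), and move the working point along a first side, around a corner, and along a second side; claims 8 and 18 add the alternating translate-then-rotate sequence. The sketch below is illustrative only, not code from the application or from Oota or Diankov, and every name in it is hypothetical; it simply traces the traversal geometry for a rectangular object:

```python
# Illustrative sketch only (hypothetical names; not code from the
# application or the cited Oota/Diankov references). It generates the
# pose sequence for the recited edge-following operation on a
# rectangular object: the working point advances along each side
# (translation) and turns 90 degrees at each corner (rotation), the
# alternating sequence recited in claims 8 and 18.
import math
from typing import Iterator, Tuple

Pose = Tuple[float, float, float]  # working-point x, y and side heading (rad)

def follow_rectangle_edge(width: float, height: float,
                          step: float = 0.01) -> Iterator[Pose]:
    """Yield working-point poses for one full trip around the edge.

    In the claimed system the robot would move the object so that each
    yielded working point sits at a fixed working offset (e.g. the
    camera's focal distance, per claim 4) from the stationary ancillary
    tool.
    """
    sides = [width, height, width, height]  # first side, second side, ...
    x = y = heading = 0.0
    for length in sides:
        steps = max(1, round(length / step))
        for _ in range(steps):
            # Translate: advance the working point along the current side.
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            yield (x, y, heading)
        # Rotate: 90-degree turn about the corner to present the next side.
        heading += math.pi / 2
        yield (x, y, heading)

if __name__ == "__main__":
    poses = list(follow_rectangle_edge(0.30, 0.20))
    # 2 * (30 + 20) = 100 translation steps plus 4 corner rotations.
    print(len(poses), "poses; final heading", round(poses[-1][2], 3), "rad")
```

A production implementation would also use the dimensional attributes measured at engagement (the limitation the Office Action maps to Diankov) to set the side lengths, and would stitch the per-pose images into a continuous edge image (the claim 2 limitation mapped to Shivaram); both are outside this sketch.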

Prosecution Timeline

Aug 05, 2024
Application Filed
Aug 09, 2024
Preliminary Amendment
Sep 10, 2024
Preliminary Amendment
Dec 03, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594964
METHOD OF AND SYSTEM FOR GENERATING REFERENCE PATH OF SELF DRIVING CAR (SDC)
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12594137
HARD STOP PROTECTION SYSTEM AND METHOD
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12583101
METHOD FOR OPERATING A MODULAR ROBOT, MODULAR ROBOT, COLLISION AVOIDANCE SYSTEM, AND COMPUTER PROGRAM PRODUCT
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12576529
ROBOT SIMULATION DEVICE
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12564962
ROBOT REMOTE OPERATION CONTROL DEVICE, ROBOT REMOTE OPERATION CONTROL SYSTEM, ROBOT REMOTE OPERATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 88% (+18.4%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 103 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month