Prosecution Insights
Last updated: April 19, 2026
Application No. 18/746,492

OPTIMIZED METHOD FOR IDENTIFICATING ROBOT POURING REGIONS BASED ON HIERARCHICAL PROCESSING AND CONNECTIVITY MAXIMIZATION AND SYSTEM THEREOF

Status: Non-Final OA (§103, §112)
Filed: Jun 18, 2024
Examiner: MOLNAR, SIDNEY LEIGH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Shandong University
OA Round: 1 (Non-Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% of resolved cases (7 granted / 13 resolved; +1.8% vs TC avg)
Interview Lift: +85.7% among resolved cases with interview (strong)
Typical Timeline: 2y 4m average prosecution; 31 applications currently pending
Career History: 44 total applications across all art units

Statute-Specific Performance

§101: 8.7% (−31.3% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 22.3% (−17.7% vs TC avg)
§112: 26.1% (−13.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 13 resolved cases

Office Action (§103, §112)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to because Figures 5-7 are relatively unclear. Applicant’s specification describes the subject matter of Figures 5-7 as being somehow related to Figures 3 and 4 without necessarily showing how the Figures are correlated. Figure 2 partially relates the content of Figure 3 with the content of Figure 4 as producing the contents of Figures 5-7 in the “Fine Identification” box. Examiner recommends including some of the overlapping variables of Figures 3 and 4 in at least Figure 5 to clearly demonstrate which face of the triangular prism is to be considered when viewing Figures 5-7.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification

The following title change is suggested: the title currently includes “identificating,” which is not a word recognized by the English language. Examiner ascertains this is likely a translation issue and recommends changing the title to instead read “identifying.”

Claim Objections

Claims 1-8 are objected to because of the following informalities:

Claim 1 recites “…based on environmental information image…” in lines 3-4. Examiner recommends including the article “an” in order to effectively introduce the environmental information image as a single entity. As such, Examiner recommends correcting the limitation to instead read “…based on an environmental information image…”. Claim 8 is objected to for a similar limitation.

Claim 1 further recites “…and, performing a fine identification by controlling a mechanical arm of the mobile robot to move to an information collection position according to the trajectory plan and the path plan, to obtain a pouring region…” in lines 6-9. Examiner ascertains that the commas within this limitation are grammatically insufficient to convey the appropriate separations of this limitation. As such, it is recommended to correct the limitation to instead read “…and[[,]] performing a fine identification by controlling a mechanical arm of the mobile robot to move to an information collection position according to the trajectory plan and the path plan[[,]] to obtain a pouring region…”. Such a correction would lead one of ordinary skill in the art to better understand that the movement of the mechanical arm is for the purpose of obtaining the pouring region. Claim 8 is objected to for a similar limitation.

Claim 1 further recites “…generating corresponding connectable domain…” in line 11. Examiner recommends including the article “a” in order to effectively introduce the corresponding connectable domain as a single entity. As such, Examiner recommends correcting the limitation to instead read “…generating a corresponding connectable domain…”. Claim 8 is objected to for a similar limitation.

Claim 2 recites the limitation “…wherein the performing the preliminary location of the target container based on the environmental information image to obtain the spatial region of the target container, comprising…” in lines 1-3. Since the preamble reiterates a method step which is described in claim 1, Examiner recommends merely reciting the method step followed by “which comprises.” As a result, the recommended correction is “…wherein performing the preliminary location of the target container based on the environmental information image to obtain the spatial region of the target container comprises…”. This correction more clearly distinguishes that the claim further limits a specific method step by the features which follow “comprises”. Claims 3-7 are objected to for having similar preambles.

Claim 5 recites the limitation “…generating the connectable domain of the source container by using a formula: Ω_container = l_s × (l_l/2 + h_container)² tan α.” Examiner ascertains that Ω_container is representative of the connectable domain, but since the claim does not directly link the numeric variable with the descriptive variable, Examiner recommends including the numeric variable in the introduction to the equation. As such, the corrected limitation would read something like “…generating the connectable domain of the source container Ω_container by using a formula: Ω_container = l_s × (l_l/2 + h_container)² tan α.”

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a rough-identification module” in claim 8; and “a fine-identification module” in claim 8.

Although such modules are not explicitly described with pertinent structure, the functions of such modules reflect those steps of the method which are computer-executed programs. As such, Examiner will interpret the rough-identification module and the fine-identification module to be software modules of the processor. In rejecting the limitations, any such software or processor which performs the desired functions will be considered as sufficient structure.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

In addition to the above 112(f) claim interpretations, claim 4 recites the limitation “…a characteristic moment method…” in lines 6-7. The disclosure provides little evidence for what such a characteristic moment method is, except briefly relating the limitation to OpenCV (see Page 8 of Applicant’s specification). Examiner will interpret such a moment as being any such method which restricts a bounding box to the optimal edge detection of a detected object, as evidenced by OpenCV “Contour Features” (attached in File).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation “An optimized method…” in line 1. The term “optimized” used to describe the method is a relative term which renders the claim indefinite.
The term “optimized” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. See MPEP 2173.05(b)(I). Additionally, the term “optimized” in this instance is a subjective term. The applicant has not defined the term in the claim, nor does the specification provide an objective standard for this measure. Therefore, the term “optimized” would be subject to the specific needs of the user, who would exercise subjective judgement without restriction, rendering the claim indefinite. See MPEP 2173.05(b)(IV). Examiner will thus interpret the claim to require any such method which performs the limitations which follow, as pertinent in reviewing the prior art. Claim 8 is rejected for having a similar limitation. Claims 2-7 and 9-10 are rejected as being dependent on claim 1 and additionally for having a similar limitation.

Claim 1 recites the limitation “…performing a preliminary location of a target container…” in line 3. Examiner is unclear what it means to perform a location. After consideration of the specification, Examiner finds that on Page 6 Applicant describes “The present stage of rough identification is to perform, according to working characteristics of a mobile robot, an identification and a positioning of a target container through a vision module when a distance from a target region is relatively far…”. Thus, Examiner will instead interpret the claim to read “…performing an identification of a preliminary location of a target container…”. Such an interpretation would further provide support for the limitation of claim 2 which reads “…performing the identification of the target container…”. In the absence of the correction to claim 1, there would be insufficient antecedent basis for the identification in claim 2, as no such identification of the target container has been performed in the originally drafted claim. The preamble of claim 2, which recites the method step rejected in claim 1, should additionally be amended to reflect the changes made to claim 1. Claim 8 is rejected as having a similar limitation to that of claim 1. Claims 2-7 and 9-10 are rejected as being dependent on claim 1.

Claim 6 recites the limitation “…making the pouring point m corresponding to m̄′ corresponding to a maximum value of the weight is the pouring point and the pouring direction of the source container…” in lines 12-14. It is first unclear what is meant by “m corresponding to m̄′ corresponding to a maximum value of the weight”. Are each of m, m̄′, and a maximum value of the weight different variables? Is m̄′ the maximum value of the weight, or is m̄′ another value entirely? The claim does not define m̄′ in a manner which would make clear what the variable represents. It is then unclear what is the pouring point and what is the pouring direction of the source container, and how the variables introduced at the beginning of the limitation lead to defining the pouring point and the pouring direction of the source container. Examiner found support for this limitation on Page 10 of Applicant’s specification, but the support did little to clear up the confusion regarding what it means for the features m, m̄′, and a maximum value of the weight to correspond, and it further does not clarify how this correspondence leads to a pouring point and a pouring direction. Examiner ascertains that clarifying the language of the claim to better reflect the relationships of the variables above with respect to the provided drawings/illustrations would be beneficial to clarifying the claim limitation as a whole.
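[Editor’s note: one possible reading of the disputed claim 6 limitation, which the discussion above entertains, is that each candidate pouring point m carries a weight and the claimed pouring point/direction is the pair whose weight is maximum. A minimal sketch of that reading follows; the function name and tuple layout are illustrative only and do not come from the claims or the specification.]

```python
def select_pouring_point(candidates):
    """Return the (point, direction) pair whose weight is maximum.

    candidates: iterable of (point, direction, weight) tuples. This is
    a hypothetical data layout chosen only to illustrate the
    "maximum value of the weight" reading of claim 6.
    """
    point, direction, _weight = max(candidates, key=lambda c: c[2])
    return point, direction
```

Under this reading, the ambiguity the Office Action identifies reduces to which quantity plays the role of the weight; the selection step itself is a simple argmax.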
Claim 7 is rejected as being dependent on claim 6. Claim 7 recites the limitation “…wherein taking the centroid of the opening of the target container as a starting point, making rays outward at an interval angle to intersect with the edge trajectory of the opening of the target container, and taking the intersections as new starting points, performing a searching by making a direction of the m̄′ along the rays to the centroid, and finally…” in lines 5-8. Examiner ascertains that each of these limitations separated by a comma is a unique step, but as currently written, with filler words such as “wherein” and “and,” it is not clear what the method is directed to. Additionally, it is unclear whether “a searching” refers to a searching process or merely a search. Examiner best understands the limitations to instead read: “wherein the simplification of the search process further comprises taking the centroid of the opening of the target container as a starting point, making rays outward at an interval angle to intersect with the edge trajectory of the opening of the target container, and taking the intersections as new starting points, and additionally, performing the search process by making a direction of the m̄′ along the rays to the centroid, and finally…”.

Examiner notes that the claims have been addressed below, in view of the prior art record, as best understood by the Examiner in light of the 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, rejections provided herein.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over You et al. (CN113031437A; hereinafter “You”, translation attached to file) in view of Sueki et al. (“Optimal Positioning of Ladle in Automatic Pouring Machine in Consideration of Pouring Liquid Accurately into Sprue”, 2017).

Regarding claim 1, You teaches an optimized method for identifying robot pouring regions based on hierarchical processing and connectivity maximization (“This invention relates to the field of water-pouring service robot control methods, and in particular to a water-pouring service robot control method based on dynamic model reinforcement learning” [0001]. Thus, an optimized method for performing robot pouring is disclosed.), comprising the following steps:

performing a preliminary location of a target container based on environmental information image to obtain a spatial region of the target container (In [0009-0010], the system obtains a depth image to acquire information in 3D space, i.e., obtains an environmental information image, and this information is provided to a neural network to determine a preliminary location of a target container based on state information, i.e., a spatial region of the target container.);

generating a trajectory plan and a path plan of a mobile chassis of a mobile robot according to the spatial region of the target container (Following the identification of the container and spatial region, the information is input to a simulation of the current environment, and an action policy, i.e., a trajectory plan and path plan according to the spatial region of the target container, is determined (see [0011-0013]).); and

performing a fine identification by controlling a mechanical arm of the mobile robot to move to an information collection position according to the trajectory plan and the path plan, to obtain a pouring region (In step 5 ([0013] and [0044-0045]), the robot arm is controlled according to the action control vector, and such an action vector is continually updated at each action position, i.e., moved to an information collection position according to the trajectory plan, such that the pouring task is complete, i.e., the pouring region is fully obtained.);

wherein a process of the fine identification specifically comprises: acquiring image information of the target container and a source container (Paragraph [0015] describes acquiring image information of the position of both target and source containers.); generating corresponding connectable domain based on the image information of the target container and the source container (The connectable domain will be the state vector s which determines the combined information to correlate the source container and target container (see also [0015]).)…

However, You does not explicitly teach …searching, by using a method of connectivity maximization, a pouring point and a pouring direction of the source container which make an intersection of the connectable domain of the source container and the target container maximum in a searching plane, further obtaining positions of the pouring point on the source container, and then obtaining the pouring region according to a combination of the positions of the pouring points.

Sueki, pertinent to the problem at hand, teaches …searching, by using a method of connectivity maximization, a pouring point and a pouring direction of the source container which make an intersection of the connectable domain of the source container and the target container maximum in a searching plane (“In this study, we propose the optimal positioning of the ladle in consideration of pouring the liquid accurately into the sprue without sloshing. In this approach, the horizontal and vertical position of pouring mouth in the ladle is determined before pouring, and held while pouring. Therefore, the sloshing is not excited while pouring. The ladle's position is optimized from the pouring flow rate pattern and starting angle of tilting the ladle” (Page 897). Thus, there is an optimal pouring point and pouring direction, i.e., horizontal and vertical position of the ladle and the angle of tilt of the ladle, of the source container (ladle) which determines the flow rate of liquid into the sprue, i.e., target container. Although “optimal” has not been defined as “maximum intersection of connectable domain”, in view of Fig. 5(b) one of ordinary skill in the art would best understand the optimal outflow position as a maximum intersection of the connectable domains of the pouring outflow area and the sprue opening.), further obtaining positions of the pouring point on the source container, and then obtaining the pouring region according to a combination of the positions of the pouring points (The positions of the pouring point on the source container are best provided by the examples in Figs. 6-8, and the pouring region according to the combination of positions of pouring points is best provided as the cross-sectional outflow area of Fig. 5(b), which would be calculated for each of the scenario positions of Figs. 6-8.).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the pouring process of You to include the connectivity maximization methods of Sueki with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because the optimal pouring of Sueki minimizes spills and sloshing that would result from an imprecise pour of a liquid into a target container (Sueki, Page 897).
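[Editor’s note: the “maximum intersection of connectable domains” reading adopted above can be sketched numerically. The sketch below assumes, purely for illustration, that the two connectable domains in the searching plane are modeled as planar discs (the claims and references do not prescribe this representation); the search then picks the candidate pouring point whose disc overlaps the target opening the most.]

```python
import math

def circle_overlap_area(c1, r1, c2, r2):
    """Area of intersection of two discs: centers c1, c2 and radii r1, r2."""
    d = math.dist(c1, c2)
    if d >= r1 + r2:          # disjoint discs: no connectable overlap
        return 0.0
    if d <= abs(r1 - r2):     # one disc fully inside the other
        return math.pi * min(r1, r2) ** 2
    # standard lens-area formula for partially overlapping circles
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def best_pouring_point(candidates, source_r, target_c, target_r):
    """Connectivity maximization over candidate pouring points.

    Each candidate's connectable domain is modeled (hypothetically) as a
    disc of radius source_r around the candidate; the target opening is a
    disc at target_c with radius target_r. Returns the candidate whose
    intersection with the target opening is maximum.
    """
    return max(candidates,
               key=lambda p: circle_overlap_area(p, source_r, target_c, target_r))
```

The disc model is only one way to instantiate the intersection-maximization idea; the same argmax structure applies to any planar representation of the two domains for which an overlap measure can be computed.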
Regarding claim 8, You teaches an optimized system for identifying robot pouring regions based on hierarchical processing and connectivity maximization (There is a system for performing the optimized robot control method for pouring (see [0001] for a description of the robot control method).), comprising:

a rough-identification module, being configured to perform a preliminary location of a target container based on environmental information image to obtain a spatial region of the target container (In [0009-0010], the system obtains a depth image to acquire information in 3D space, i.e., obtains an environmental information image, and this information is provided to a neural network to determine a preliminary location of a target container based on state information, i.e., a spatial region of the target container.); and being configured to generate a trajectory plan and a path plan of a mobile chassis of a mobile robot according to the spatial region of the target container (Following the identification of the container and spatial region, the information is input to a simulation of the current environment, and an action policy, i.e., a trajectory plan and path plan according to the spatial region of the target container, is determined (see [0011-0013]).); and

a fine-identification module, being configured to perform a fine identification by controlling a mechanical arm of the mobile robot to move to an information collection position according to the trajectory plan and the path plan, to obtain a pouring region (In step 5 ([0013] and [0044-0045]), the robot arm is controlled according to the action control vector, and such an action vector is continually updated at each action position, i.e., moved to an information collection position according to the trajectory plan, such that the pouring task is complete, i.e., the pouring region is fully obtained.);

wherein a process of the fine identification specifically comprises: acquiring image information of the target container and a source container (Paragraph [0015] describes acquiring image information of the position of both target and source containers.); generating corresponding connectable domain based on the image information of the target container and the source container (The connectable domain will be the state vector s which determines the combined information to correlate the source container and target container (see also [0015]).)…

Although You is not explicit in determining modules for performing such control functions in the system, these modules would be obvious to one of ordinary skill in the art, as the control method is obviously implemented on a robotic controller to perform the control method. Such a controller is obvious to one of ordinary skill in the art.

However, You does not explicitly teach …searching, by using a method of connectivity maximization, a pouring point of the source container which makes an intersection of the connectable domains of the source container and the target container maximum in a searching plane, and a pouring direction, further obtaining positions of the pouring point on the source container, and then obtaining the pouring region according to a combination of the positions of the pouring point.

Sueki, pertinent to the problem at hand, teaches …searching, by using a method of connectivity maximization, a pouring point of the source container which makes an intersection of the connectable domains of the source container and the target container maximum in a searching plane (“In this study, we propose the optimal positioning of the ladle in consideration of pouring the liquid accurately into the sprue without sloshing. In this approach, the horizontal and vertical position of pouring mouth in the ladle is determined before pouring, and held while pouring. Therefore, the sloshing is not excited while pouring. The ladle's position is optimized from the pouring flow rate pattern and starting angle of tilting the ladle” (Page 897). Thus, there is an optimal pouring point and pouring direction, i.e., horizontal and vertical position of the ladle and the angle of tilt of the ladle, of the source container (ladle) which determines the flow rate of liquid into the sprue, i.e., target container. Although “optimal” has not been defined as “maximum intersection of connectable domain”, in view of Fig. 5(b) one of ordinary skill in the art would best understand the optimal outflow position as a maximum intersection of the connectable domains of the pouring outflow area and the sprue opening.), and a pouring direction, further obtaining positions of the pouring point on the source container, and then obtaining the pouring region according to a combination of the positions of the pouring point (The positions of the pouring point on the source container are best provided by the examples in Figs. 6-8, and the pouring region according to the combination of positions of pouring points is best provided as the cross-sectional outflow area of Fig. 5(b), which would be calculated for each of the scenario positions of Figs. 6-8.).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the pouring process of You to include the connectivity maximization methods of Sueki with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because the optimal pouring of Sueki minimizes spills and sloshing that would result from an imprecise pour of a liquid into a target container (Sueki, Page 897).

Regarding claim 9, You as modified by Sueki teaches …the optimized method for identifying robot pouring regions based on hierarchical processing and connectivity maximization according to claim 1. Although You is not explicit in determining the method as being stored on a non-transitory computer-readable storage medium in the form of a computer program to be executed by a processor, it would be obvious to one of ordinary skill in the art that the disclosed control method would be applied to the robot via a controller inclusive of such a non-transitory computer-readable storage medium which stores command instructions in the form of a computer program to execute said method.

Regarding claim 10, You as modified by Sueki teaches …implementing steps of the optimized method for identifying robot pouring regions based on hierarchical processing and connectivity maximization according to claim 1. Although You is not explicit in determining the method as being implemented as a computer program stored on a memory and executed by a processor as part of “computer equipment”, it would be obvious to one of ordinary skill in the art that such a control method would be applied to the robot via a robot controller, which is computer equipment comprising such a memory and processor that store and execute a program pertaining to the step-by-step instruction of the method which is taught.

Claims 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over You in view of Sueki, further in view of Wu et al. (“A Real-Time Cup-Detection Method Based on YOLOv3 for Inventory Management”, 2022; hereinafter “Wu”), and further in view of Blue et al. (“Edge detection based boundary box construction algorithm for improving precision of object detection in YOLOv3”, 2019; hereinafter “Blue”).
Regarding claim 2, You as modified by Sueki teaches the optimized method according to claim 1, with You further teaching …wherein the performing the preliminary location of the target container based on the environmental information image to obtain the spatial region of the target container, comprising: …calculating to obtain world coordinates of the target container combined with a target distance obtained by a depth camera (“Since the camera is positioned in a third-person perspective, after perceiving the spatial information of the scene, it is necessary to convert the coordinates and angles of the source and target containers into a coordinate system with the robotic arm base as the origin” [0095]. Thus, the world coordinates of the target container are combined with the depth image results to determine a coordinate centered at the base of the robot in determining the container positions.). However, You as modified does not explicitly teach …wherein the performing the preliminary location of the target container based on the environmental information image to obtain the spatial region of the target container, comprising: performing the identification of the target container by using a YOLO algorithm, obtaining a region of the target container in the environmental information image and marking an identification frame; and performing gray processing and binary processing on the environmental information image according to a position of the identified region of the target container in the environmental information image, to obtain centroid coordinates of the region of the target container, and further obtaining center coordinates of an upper edge of the target container… Wu, pertinent to the problem at hand, teaches …wherein the performing the preliminary location of the target container based on the environmental information image to obtain the spatial region of the target container, comprising: performing the identification of the target container by using a 
YOLO algorithm, obtaining a region of the target container in the environmental information image and marking an identification frame (As shown in Fig. 15, YOLOv3 algorithms of both previous research and current research are compared with each algorithm obtaining a region of a cup, i.e., target container, in the environmental information image and marking each cup with the corresponding bounding box, i.e., identification frame.); and performing (As shown in Fig. 2, the bounding box and associated coordinates are adjusted based on a centroid position of the detected object and the object boundary according to the YOLOv3 detection method. Edge detections in this method are exemplified as binarization methods, i.e., binary processing, according to the collected images (see Fig. 6). Specifically, they used the SSD algorithm which is inclusive of binary processing.)…

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the detections of You to include the image processing techniques of Wu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification in order to reduce error rates of the image detection in determining a surface of the target and source container.

However, You as modified by Sueki and Wu does not include …performing gray processing … on the environmental information image … Blue, pertinent to the problem at hand, teaches … performing gray processing … on the environmental information image (Fig.
4 demonstrates a method of gray processing on an image to refine the boundaries of the object which has been detected by the YOLOv3 algorithm.)… Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to additionally modify the image detection methods of You to include gray processing as taught by Blue with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because the use of gray processing in the method of Blue is exemplified as reducing error in the edge detection to determine the bounding box on the external periphery of a detected object within the image. Regarding claim 4, You as modified by Sueki teaches the optimized method according to claim 1, with Sueki further teaching …wherein the generating the connectable domain of the target container based on the image information of the target container, comprising: …using an area surrounded by the edge trajectory of the opening of the target container as a bottom and a parameter hreceiver as a high to generate a region as the connectable domain of the target container (As shown in Fig. 5-8, the edge of the sprue is considered as a bottom with Sz[m] as a high which generates a region of the detectable domain according to the ladle pouring positions.); wherein, the parameter hreceiver is an upward projection length of a plane of the opening of the target container (Sz[m] is the upward projection length of a plane of the opening of the target container which extends to the height of the pouring mouth.). 
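The gray processing, binary processing, and centroid extraction recited in claim 2 can be sketched in a few lines of NumPy (an illustrative reading only; the function name, the fixed threshold, and the luminance weights are assumptions, and the claimed method would first crop to the YOLO identification frame):

```python
import numpy as np

def locate_container_region(rgb, threshold=128):
    """Hypothetical sketch of the claim 2 pipeline: gray processing,
    binary processing, then centroid coordinates of the container region."""
    # Gray processing: standard ITU-R BT.601 luminance weighting.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Binary processing: pixels above the threshold become foreground.
    binary = gray > threshold
    if not binary.any():
        return None  # no container region detected
    # Centroid coordinates from the mean of foreground pixel positions
    # (equivalent to first-order image moments divided by the area).
    ys, xs = np.nonzero(binary)
    return xs.mean(), ys.mean()
```

The center coordinates of the upper edge of the container would then follow from the topmost foreground row of the same binary mask.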
However, You as modified by Sueki does not explicitly teach …performing noise reduction processing, gray processing, and binary processing on the image of the target container, obtaining an edge trajectory and a centroid of an opening of the target container by using a contour detection method and a characteristic moment method… Wu, pertinent to the problem at hand, teaches …performing … binary processing on the image of the target container (Section 3.3 discusses the use of SSD algorithm to detect edge parameters of the cup objects which is inclusive of binary processing.), obtaining an edge trajectory and a centroid of an opening of the target container by using a contour detection method and a characteristic moment method (Fig. 15 and Fig. 2 demonstrate the use of a bounding box restricted to an edge trajectory around a centroid at the opening of the cup through the contour, i.e., edge, detection method and characteristic moments of such features.)…

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the image processing of You to include the binary processing and edge detection methods of Wu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification in order to reduce error rates of the image detection in determining a surface of the target and source container.
However, You as modified by Sueki and Wu does not explicitly teach … performing noise reduction processing, gray processing… on the image of the target container… Blue, pertinent to the problem at hand, teaches …performing noise reduction processing, gray processing… on the image of the target container (The YOLOv3 edge detection in this case utilizes grayscale and image blurring, i.e., gray and noise reduction processing, to determine refined edges of the detected object.)… Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to additionally modify the image detection methods of You to include noise reduction processing and gray processing as taught by Blue with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because the use of gray processing in the method of Blue is exemplified as reducing error in the edge detection to determine the bounding box on the external periphery of a detected object within the image.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over You in view of Sueki, further in view of Wu, further in view of Blue, further in view of Chief Architect (“Increasing the Field of View for a 3D Camera View”, 2021), and further in view of Santos et al. (“MB-RRT: An Inverse Kinematics Solver of Reduced Dimension”, 2021; hereinafter “Santos”).
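The "characteristic moment method" recited in claim 4 is plausibly the standard image-moment computation. Under that assumption, the centroid of an opening contour and its long- and short-axis lengths can be sketched as follows (the 4·sqrt(eigenvalue) axis convention and the function name are illustrative choices, not taken from the record):

```python
import numpy as np

def opening_moments(binary):
    """Sketch of a characteristic-moment computation on a binary opening
    mask: centroid plus long- and short-axis lengths derived from
    second-order central moments. Illustrative conventions only."""
    ys, xs = np.nonzero(binary)
    cx, cy = xs.mean(), ys.mean()
    # Second-order central moments form the covariance of the region.
    cov = np.cov(np.stack([xs - cx, ys - cy]))
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # 4*sqrt(eigenvalue) approximates an ellipse's full axis length;
    # the clip guards against tiny negative values from round-off.
    long_axis, short_axis = 4 * np.sqrt(np.clip(eigvals, 0.0, None))
    return (cx, cy), long_axis, short_axis
```

A contour-detection step (e.g., boundary tracing of the binary mask) would supply the edge trajectory that this moment computation summarizes.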
Regarding claim 3, You as modified by Sueki teaches the optimized method according to claim 1, with You further teaching …wherein the generating the trajectory plan and the path plan of the mobile chassis of the mobile robot according to the spatial region of the target container, and controlling the mechanical arm of the mobile robot to move to the information collection position according to the trajectory plan and the path plan, comprising: …controlling the mobile chassis to move to a vicinity of a target position according to the planned motion trajectory (Action policies control the robot to move to the corresponding action locations to complete the pouring task (see [0092-0093]).), … and moving a vision module on the mechanical arm to the information collection position (“The installation position and angle of the depth camera do not need to be strictly fixed. It can be mounted on the robot or mounted on a fixed bracket in the third-person view. Just adjust the camera so that all relevant objects are included in the field of view” [0065]. Thus, when moving the arm, the camera, i.e., vision module, is also adjusted when positioned on the arm so that the relevant objects are included in the field of view, i.e., an information collection position.). 
However, You as modified does not explicitly teach …obtaining the information collection position for a stage of the fine identification by increasing a hcamera on the z-axis of the world coordinates of the target container; wherein, the hcamera refers to a distance that a camera needs to keep from an object to be photographed when taking pictures to obtain information; and planning a motion trajectory of the mobile chassis by using a rapidly-exploring random tree (RRT) algorithm after obtaining the information collection position, … then generating, by using an inverse kinematics method, joint parameters of the mechanical arm according to the information collection position generated in a stage of rough identification… Regarding the limitations …obtaining the information collection position for a stage of the fine identification by increasing a hcamera on the z-axis of the world coordinates of the target container; wherein, the hcamera refers to a distance that a camera needs to keep from an object to be photographed when taking pictures to obtain information…, You (as cited additionally above) teaches “The installation position and angle of the depth camera do not need to be strictly fixed. It can be mounted on the robot or mounted on a fixed bracket in the third-person view. Just adjust the camera so that all relevant objects are included in the field of view” [0065]. Chief Architect, pertinent to the problem at hand, teaches adjustments to a camera to expand a field of view as desired by a user, inclusive of height adjustments (see “To Use the Camera Specification Dialogue to Adjust the View” and “To Modify the Camera in Plan View”). By combination of these teachings, it would therefore be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the teachings of You to include adjustments to the height of the camera as taught by Chief Architect with a reasonable expectation of success. 
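The rapidly-exploring random tree planning recited in the claim 3 limitations quoted above can be illustrated with a minimal 2-D sketch (a generic textbook RRT over an assumed 10×10 workspace, not the MB-RRT of Santos; the step size, goal bias, and bounds are arbitrary assumptions):

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
    """Minimal 2-D RRT sketch, illustrative only."""
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        # Sample a random point, biased occasionally toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest existing node one fixed step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # skip nodes that collide with obstacles
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Walk parents back to the start to recover the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

A chassis planner of the claimed kind would sample the robot's configuration space rather than the plane, then hand the resulting goal pose to an inverse-kinematics stage to generate the arm's joint parameters.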
One of ordinary skill in the art would have been motivated to make such a modification because by adjusting the height of the camera, the field of view will increase and thus maintain the object(s) in view of the camera. However, You as modified still does not explicitly teach … planning a motion trajectory of the mobile chassis by using a rapidly-exploring random tree (RRT) algorithm after obtaining the information collection position, … then generating, by using an inverse kinematics method, joint parameters of the mechanical arm according to the information collection position generated in a stage of rough identification… You merely teaches a general action policy and control strategy. Given that the action policy of You provides goal positions for the pouring to be executed, Santos, pertinent to the problem at hand, teaches … planning a motion trajectory of the mobile chassis by using a rapidly-exploring random tree (RRT) algorithm after obtaining the information collection position, … then generating, by using an inverse kinematics method, joint parameters of the mechanical arm according to the information collection position generated in a stage of rough identification (“…we propose in this paper a probabilistic method of inverse kinematics for manipulators based on RRT, named Workspace-RRT. The proposed approach applies the probabilistic search directly on the workspace of a serial robot manipulator” (Page 148559). “A second method is proposed in this paper by incorporating a new probability model and a new metric for the nearest node to the Workspace-RRT. Named Manipulator-Based Rapidly Random Tree (MB-RRT), this approach was developed focused on aspects such as fast convergence, prevention of local minima and singularities, simplicity, and probabilistic completeness” (Page 148559). Thus, there is an inverse kinematics based RRT solution which determines joint parameters to reach a goal position in the path planning problem. 
Examples of the solutions can be seen in Fig. 5-15.)… Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the action policies of You to include RRT and inverse kinematics methods as taught by Santos with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because the inverse kinematic and RRT methods as taught by Santos apply the search directly to the workspace of the robot manipulator and additionally determine complete and efficient solutions for the collision-free path planning problem (Page 148559).

Allowable Subject Matter

Claims 5-7 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 5, You as modified by Sueki teaches the optimized method according to claim 1. Wu would further teach …performing …binary processing on the image of the source container (Section 3.3 discusses the use of SSD algorithm to detect edge parameters of the cup objects which is inclusive of binary processing.), obtaining an edge trajectory and a centroid of an opening of the source container by using a contour detection method and a characteristic moment method (Fig. 15 and Fig.
2 demonstrates the use of a bounding box restricted to an edge trajectory around a centroid at the opening of the cup through the contour, i.e., edge, detection method and characteristic moments of such features.), and calculating a long-axis length and a short-axis length of a contour of the opening according to a circumscribed rectangle of the contour (As shown in Fig. 2, the bounding box which detects the edges of the contour opening is provided a width bw and height bh which are the long-axis length and short-axis length according to the circumscribed rectangle of the contour which is the bounding box.)… Blue would further teach …performing noise reduction processing, gray processing … on the image of the source container (The YOLOv3 edge detection in this case utilizes grayscale and image blurring, i.e., gray and noise reduction processing, to determine refined edges of the detected object.)… **See rejection of claim 4 for appropriate reasons to combine the teachings of Wu and Blue for these limitations.

However, none of You, Sueki, Wu, or Blue provides adequate teachings for …simplifying a region where a liquid flows out of the source container into a triangular prism having a base area of a right triangle with an angle of α and a height of ll / 2 + hcontainer, and a height ls according to the calculated long-axis distance ll and short-axis distance ls; wherein, the angle α is set according to states of the fluid and a pouring angular velocity of the source container, hcontainer is the distance from a pouring point m of the source container to a surface of the opening of the target container; and, calculating and generating the connectable domain of the source container by using a formula: Ωcontainer = ls × (ll / 2 + hcontainer)² tan α. In the closest art of record, Sueki teaches similar such variables for determining the region where a liquid flows out of the source container (see Figs. 5-8).
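Read literally as reproduced in this action, the claim 5 formula is Ωcontainer = ls × (ll / 2 + hcontainer)² × tan α, i.e., the short-axis length times the square of the prism's triangle height times tan α. A direct transcription (note that the area of a right-triangle base would ordinarily carry a further factor of 1/2, which the formula as reproduced here does not show; the transcription takes it as given):

```python
import math

def connectable_domain(ls, ll, h_container, alpha):
    """Connectable domain of the source container, transcribed directly
    from the claim 5 formula as reproduced in this Office action:
    Omega = ls * (ll / 2 + h_container)**2 * tan(alpha)."""
    return ls * (ll / 2 + h_container) ** 2 * math.tan(alpha)
```

For example, with assumed values ls = 0.1 m, ll = 0.04 m, hcontainer = 0.05 m, and α = 45°, the expression evaluates to 0.1 × 0.07² × 1 ≈ 4.9 × 10⁻⁴ m³.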
However, Sueki is absent any mention of a triangular prism having a base area of a right triangle specific to those variables indicated in the claim. Sueki does not explicitly simplify the measurements of the container using any such three-dimensional determination of the region where a liquid flows out of the source container when the connectable domain is determined as shown in Fig. 5. Thus, given that none of the other art which was considered in rejecting the claims teaches this feature, upon correction of any relevant 112b rejections cited above, the limitation regarding the triangular prism of claim 5 would be considered as allowable.

Regarding claim 6, You as modified by Sueki teaches the optimized method according to claim 1, with Sueki further teaching wherein the searching, by using the method of connectivity maximization, the pouring point of the source container which makes the intersection of the connectable domain of the source container and the target container maximum in the searching plane, and the pouring direction, comprising: obtaining a projected plane by projecting the connectable region of the source container onto the searching plane (In Fig. 5, the projected connectable region of the source container onto the searching plane is determined as the cross-sectional area of outflow liquid which may be considered as a projected plane onto the searching plane in which the sprue is located.)… However, the teachings of You as modified by Sueki fall short in determining …dividing the projection plane of the connectable domain of the source container into a plurality of rectangles equally, and calculating and obtaining weight values of the plurality of the rectangles according to parameters of the connectable domain… In an attempt to find art which would adequately teach this corresponding feature, Examiner ascertains Jiang as teaching the closest relevant features.
Jiang uses a grid-based scoring system to determine object localization, as is best shown in Figs. 2 and 3. However, the teachings of Jiang are directed towards a general image processing technique. You, which teaches the image processing features of the rejected claim, does not teach the connectable domain and corresponding projection plane. Sueki, which modifies You to teach the connectable domain and corresponding projection plane, does not teach any image processing methods which would render the current limitation as an obvious combination which would be known to one of ordinary skill in the art. Therefore, Examiner cannot ascertain an appropriate reason to combine the teachings of Jiang with the teachings of You and Sueki to obtain the invention of claim 6.

Since claim 6 is absent any teaching of a rectangularly divided and weighted projection plane, the claim is additionally absent of any teaching regarding … searching, in the searching plane, for a sum of weights of the projection plane of the connectable domain of the source container contained in an edge trajectory of an opening of the target container, and making the pouring point m corresponding to m̄′ corresponding to a maximum value of the weight is the pouring point and the pouring direction of the source container; wherein, a projection of the pouring point m on the surface of the opening of the target container is set as m’, and m̄′ is a unit vector formed by extending an opening direction of the opening of the target container from a starting point m’ which is a projection of the pouring point m on the surface of the opening of the target container.

Thus, given that none of the other art which was considered in rejecting the claims teaches these features, upon correction of any relevant 112b rejections cited above, the limitation regarding the rectangularly divided and weighted projection plane of claim 6 would be considered as allowable.
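The claim 6 feature found missing from the art (a projection plane divided equally into weighted rectangles, searched for the position whose contained weight sum is maximal) amounts to a windowed-sum maximization. A minimal sketch, with the square window and brute-force search as illustrative assumptions, since the claim's weighting parameters are not of record:

```python
import numpy as np

def max_weight_window(weights, k):
    """Sketch of the claim 6 search: 'weights' holds the weight value of
    each equal rectangle of the projection plane; return the k-by-k window
    (a stand-in for the opening's edge trajectory) of maximal weight sum."""
    rows, cols = weights.shape
    best_sum, best_pos = -np.inf, None
    for r in range(rows - k + 1):
        for c in range(cols - k + 1):
            s = weights[r:r + k, c:c + k].sum()
            if s > best_sum:
                best_sum, best_pos = s, (r, c)
    return best_pos, best_sum
```

The pouring point and pouring direction would then be recovered from the grid cell at best_pos.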
Claim 7, upon correction of any relevant 112b rejections cited above, would additionally be considered as allowable as it is dependent on claim 6.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Other such documents which were considered in this round of prosecution but were not used in rejection or reasons for allowance are attached in the file.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY L MOLNAR whose telephone number is (571)272-2276. The examiner can normally be reached 9 A.M. to 4 P.M. EST Monday-Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jonathan (Wade) Miles can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.L.M./Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Jun 18, 2024
Application Filed
Mar 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600039
ROBOT, CONVEYING SYSTEM, AND ROBOT-CONTROLLING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12533807
ROBOTIC APPARATUS AND CONTROL METHOD THEREOF
2y 5m to grant Granted Jan 27, 2026
Patent 12479098
SURGICAL ROBOTIC SYSTEM WITH ACCESS PORT STORAGE
2y 5m to grant Granted Nov 25, 2025
Patent 12384048
TRANSFER APPARATUS
2y 5m to grant Granted Aug 12, 2025
Patent 12376922
TOOL HEAD POSTURE ADJUSTMENT METHOD, APPARATUS AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
54%
Grant Probability
99%
With Interview (+85.7%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
