Prosecution Insights
Last updated: April 19, 2026
Application No. 18/414,409

METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIUMS FOR POSITIONING TARGET OBJECT

Non-Final OA: §101, §102, §103
Filed: Jan 16, 2024
Examiner: BALI, VIKKRAM
Art Unit: 2663
Tech Center: 2600 (Communications)
Assignee: Zhejiang Huaray Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Grants 82% of resolved applications (above average).
Career Allow Rate: 82% (510 granted / 626 resolved; +19.5% vs TC avg)
Interview Lift: +11.3% (moderate; allow rate of resolved cases with an interview vs. without)
Avg Prosecution: 2y 11m typical timeline (34 applications currently pending)
Career History: 660 total applications across all art units
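
For readers who want to sanity-check the headline numbers above, here is a minimal Python sketch of how they could be reproduced from resolved-case counts. Only the totals (510 granted of 626 resolved) and the +11.3% lift come from this page; the with/without-interview split below is hypothetical, chosen only to be consistent with those published figures.

```python
# Minimal sketch: reproduce the examiner panel's headline numbers from
# resolved-case counts. Only the totals (510/626) and the +11.3% lift are
# taken from the page; the interview split below is hypothetical, chosen
# so the subtotals sum to the published totals.

granted, resolved = 510, 626
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 81.5%, shown as 82%

def allow_rate(granted: int, resolved: int) -> float:
    return granted / resolved

# Hypothetical split of the 626 resolved cases by interview status.
rate_with_interview = allow_rate(208, 235)     # 88.5%
rate_without_interview = allow_rate(302, 391)  # 77.2%
print(f"Interview lift: {rate_with_interview - rate_without_interview:+.1%}")  # +11.3%
```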

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)
Based on career data from 626 resolved cases; the chart's black line marks the estimated Tech Center average.
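
One detail worth noting: if each "vs TC avg" delta is simply the examiner's per-statute rate minus the Tech Center average, then all four deltas imply the same 40.0% average, which is consistent with the single black line the chart caption describes. A quick check, with the subtraction model itself being an assumption:

```python
# Check, under the assumption that each "vs TC avg" delta equals the
# examiner's per-statute rate minus the Tech Center average. All four
# statutes imply the same 40.0% TC average (the chart's single black line).

examiner_rates = {"§101": 16.7, "§102": 7.8, "§103": 51.2, "§112": 18.9}
deltas = {"§101": -23.3, "§102": -32.2, "§103": +11.2, "§112": -21.1}

for statute, rate in examiner_rates.items():
    implied_tc_avg = rate - deltas[statute]
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {implied_tc_avg:.1f}%")
# Every statute yields an implied TC average of 40.0%.
```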

Office Action

Grounds of rejection: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: a result determination module, an image determination module, and a position determination module in claim 19.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. The four eligible categories of invention are: (1) a process, which is an act, or a series of acts or steps; (2) a machine, which is a concrete thing, consisting of parts, or of certain devices and combinations of devices; (3) a manufacture, which is an article produced from raw or prepared materials by giving to these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery; and (4) a composition of matter, which is all compositions of two or more substances and all composite articles, whether they be the results of chemical union, or of mechanical mixture, or whether they be gases, fluids, powders, or solids. MPEP 2106(I).

Claim 20 is rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention because the broadest reasonable interpretation of the instant claims in light of the specification encompasses transitory signals, and transitory signals are not within one of the four statutory categories (i.e., they are non-statutory subject matter). See MPEP 2106(I). However, claims directed toward a non-transitory computer-readable storage medium may qualify as a manufacture, making the claim patent-eligible subject matter. MPEP 2106(I).
Therefore, amending the claims to recite a “non-transitory computer-readable storage medium” would resolve this issue.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 10-14, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “Visual reconstruction and localization based robust robotic 6-Dof grasping in the wild” by Liang.

With respect to claim 1, Liang discloses a method for positioning a target object, comprising: determining an identification result by processing an image based on an identification model, wherein the identification result includes a first position of each of at least one target object in a first coordinate system; determining, from the image, a target image of each of the at least one target object based on the first position of each of the at least one target object in the first coordinate system; and determining, based on a first reference image and the target image of each of the at least one target object, a second position of each of the at least one target object in a second coordinate system, wherein the second position is configured to determine operation parameters of an operating device (see section II, wherein the “…goal is to achieve reliable and autonomous target observation and 6-DoF grasping in an unstructured environment. It covers the complete pipeline of the robotic grasping system including hand-eye calibration, environment perception, pose estimation and target grasping (see Fig. 1)…”; see also Algorithm 1 in section II.E, step 4, which applies YOLO v4 to identify the objects using various features, and step 23, which computes the final position of the object to be grasped by the robot as shown in figure 5), as claimed.

With respect to claim 2, Liang further discloses determining a first feature of the target image of each of the at least one target image; and determining, based on a similarity between the first feature of the target image of each of the at least one target object and a second feature, an operating order in which the operating device works on the at least one target object, wherein the second feature corresponds to a second reference image (see Algorithm 1 in section II.E, step 7, for computing feature matching for the object; section II.E ends with “…operate the manipulator to complete the grasping task”), as claimed.
With respect to claim 3, Liang further discloses wherein the first feature is obtained based on the target image through a feature extraction model, and the feature extraction model is a machine learning model (see Algorithm 1, and section II.E, page 72458, right-hand column, wherein “…In order to select the feature point detection scheme adaptively, we first use the pre-trained YOLO v4 [21] target detector to get the position of the target in the image, and then extract the features by SIFT method”), as claimed.

With respect to claim 4, Liang further discloses wherein for each of the at least one target object, a representation parameter of the first position of the target object includes a direction parameter of an object frame where the target object is located (see Algorithm 1, step 4, applying YOLO v4 to detect the bounding box for the objects; and section II.E, page 72458, right-hand column, wherein “…In order to select the feature point detection scheme adaptively, we first use the pre-trained YOLO v4 [21] target detector to get the position of the target in the image, and then extract the features by SIFT method”), as claimed.

With respect to claim 5, Liang further discloses wherein the representation parameters include: a plurality of position parameters of a plurality of key points of the object frame (see section II.E, page 72458, right-hand column, wherein “…In order to select the feature point detection scheme adaptively, we first use the pre-trained YOLO v4 [21] target detector to get the position of the target in the image, and then extract the features by SIFT method”), as claimed.

Claims 10-14 are rejected for the same reasons as set forth in the rejection of claims 1-5, because claims 10-14 claim subject matter of similar scope to claims 1-5. Claims 19 and 20 are rejected for the same reasons as set forth in the rejection of claim 1, because claims 19 and 20 claim subject matter of similar scope to claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-9 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over “Visual reconstruction and localization based robust robotic 6-Dof grasping in the wild” by Liang in view of Taamazyan et al. (US Pub. 2022/0405506).

With respect to claim 6, Liang discloses all the elements as claimed and rejected in claim 5 above.
However, Liang fails to explicitly disclose wherein the identification model is obtained by a training process; labels in the training process include a sample direction parameter of a sample object frame where each of at least one sample object is located and a plurality of sample position parameters of a plurality of sample key points of the sample object frame; and a loss function includes a first loss item and a second loss item, wherein the first loss item is constructed based on the sample direction parameter, and the second loss item is constructed based on the plurality of sample position parameters by a Wing Loss function, as claimed.

Taamazyan, in the same field, teaches that the identification model is obtained by a training process; labels in the training process include a sample direction parameter of a sample object frame where each of at least one sample object is located and a plurality of sample position parameters of a plurality of sample key points of the sample object frame; and a loss function includes a first loss item and a second loss item, wherein the first loss item is constructed based on the sample direction parameter, and the second loss item is constructed based on the plurality of sample position parameters by a Wing Loss function (see paragraph 0117, wherein “…a processing pipeline may include receiving images captured by sensor devices (e.g., master cameras 10 and support cameras 30) and outputting control commands for controlling a robot arm, where the processing pipeline is trained, in an end-to-end manner, based on training data that includes sensor data as input and commands for controlling the robot arm (e.g., a destination pose for the end effector 26 of the robotic arm 24) as the labels for the input training data…”; and paragraph 0191, wherein “…where the modification may include flattening the output of the neural network before supplying the output to the loss function used to train the disparity neural network, such that the loss function identifies and detects disparities along both the x-axis and the y-axis. In some embodiments, an optical flow neural network is trained and/or retrained to operate on the given types of input data…”), as claimed.

It would have been obvious to one of ordinary skill in the art, as of the effective filing date of the invention, to combine the two references, as they are analogous art solving the similar problem of grasping objects with a robotic arm using image analysis. The teaching of Taamazyan to train a model in order to get the position of the object can be incorporated into Liang as suggested (see page 72454, right-hand column, wherein “…Inspiration from the strategy of neural network training”), for suggestion; and modifying the system yields picking an object from a plurality of objects using a robotic arm (see Taamazyan paragraph 0004), for motivation.

With respect to claim 7, Liang and Taamazyan, for the same reasons of combination, disclose wherein the identification model includes a feature extraction layer, a feature fusion layer, and an output layer; wherein the feature extraction layer includes a plurality of convolutional layers connected in series, and the plurality of convolutional layers output a plurality of graph features; the feature fusion layer fuses the plurality of graph features to determine a third feature of the image; and the output layer processes the third feature to determine the identification result (see Taamazyan figure 2, numeral 15, feature extractor; paragraph 0062, wherein “…FIG. 2 is a more detailed block diagram of the vision module 7 according to one embodiment. The vision module 7 may include a feature extractor 15 and a predictor 19 (e.g., a classical computer vision prediction algorithm or a trained statistical model) configured to compute a prediction output 21 (e.g., a statistical prediction) regarding one or more objects 2 in the scene based on the output of the feature extractor…”; and paragraph 0166, wherein “…in one embodiment, the deep learning network 412 is configured to generate feature maps based on the input images 410, and employ a region proposal network (RPN) to propose regions of interest from the generated feature maps. The proposals by the CNN backbone may be provided to a box head 414 for performing classification and bounding box regression. In one embodiment, the classification outputs a class label 416 for each of the object instances in the input images 410, and the bounding box regression predicts bounding boxes 418 for the classified objects…”), as claimed.

With respect to claim 8, Liang and Taamazyan, for the same reasons of combination, disclose wherein the determining, based on the first reference image and the target image of each of the at least one target object, the second position of each of the at least one target object in the second coordinate system includes: for each of the at least one target object, determining a transformation parameter by processing, based on a transformation model, the first reference image and the target image of the target object; and converting, based on the transformation parameter, a third position of the target object in a third coordinate system into the second position, wherein the third coordinate system is determined based on the target image of the target object (see Taamazyan paragraph 0100, wherein “…A pose estimator 100 according to various embodiments of the present disclosure is configured to compute or estimate poses of the objects 22 based on information captured by the main camera 10 and the support cameras 30… Types of electronic circuits may include a central processing unit (CPU), a graphics processing unit (GPU), an artificial intelligence (AI) accelerator (e.g., a vector processor, which may include vector arithmetic logic units configured to efficiently perform operations common to neural networks, such as dot products and softmax)”; and paragraph 0169, wherein “…block 430, the matching algorithm identifies features of a first object instance in a first segmentation mask. The identified features for the first object instance may include a shape of the region of the object instance, a feature vector in the region, and/or keypoint predictions in the region. The shape of the region for the first object instance may be represented via a set of points sampled along the contours of the region. Where a feature vector in the region is used as the feature descriptor, the feature vector may be an average deep learning feature vector extracted via a convolutional neural network…”), as claimed.
With respect to claim 9, Liang and Taamazyan, for the same reasons of combination, disclose wherein the transformation model includes an encoding layer and a conversion layer, wherein the encoding layer processes the target image to determine a first encoding vector, and processes the first reference image to determine a second encoding vector; and the conversion layer processes the first encoding vector and the second encoding vector to determine the transformation parameter (see Taamazyan paragraph 0169, wherein “…block 430, the matching algorithm identifies features of a first object instance in a first segmentation mask. The identified features for the first object instance may include a shape of the region of the object instance, a feature vector in the region, and/or keypoint predictions in the region. The shape of the region for the first object instance may be represented via a set of points sampled along the contours of the region. Where a feature vector in the region is used as the feature descriptor, the feature vector may be an average deep learning feature vector extracted via a convolutional neural network”), as claimed.

Claims 15-18 are rejected for the same reasons as set forth in the rejection of claims 6-9, because claims 15-18 claim subject matter of similar scope to claims 6-9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKKRAM BALI, whose telephone number is (571) 272-7415. The examiner can normally be reached Monday-Friday, 7:00 AM-3:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/VIKKRAM BALI/
Primary Examiner, Art Unit 2663
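
Since the §102 and §103 rejections turn on how claim 1's pipeline maps onto Liang, a schematic sketch of that pipeline may help non-specialist readers follow the claim mapping: an identification model yields each target's first position in image coordinates, a target image is cropped around that position, and a transformation estimated against a reference image maps the target into a second (e.g., operating-device) coordinate system. Everything below is a hypothetical stand-in written for illustration; none of it is code from the application, from Liang, or from any real detector.

```python
# Illustrative sketch only: a toy version of the claimed positioning pipeline.
# All functions are hypothetical stand-ins, not code from the application.

import numpy as np

def identify(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Stand-in for the identification model (e.g., a YOLO-style detector).
    Returns bounding boxes (x, y, w, h) in the first (image) coordinate system."""
    return [(40, 30, 64, 64)]  # placeholder detection

def crop(image: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """Cut out the target image for one detected object."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def estimate_transform(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Stand-in for the transformation model: in practice this might be
    feature matching plus a homography fit; here it is a fixed 3x3 matrix."""
    return np.array([[1.0, 0.0, 5.0],
                     [0.0, 1.0, -3.0],
                     [0.0, 0.0, 1.0]])

image = np.zeros((480, 640), dtype=np.uint8)
reference = np.zeros((64, 64), dtype=np.uint8)

for box in identify(image):
    target = crop(image, box)
    H = estimate_transform(reference, target)
    # Third coordinate system: local to the cropped target image.
    third_pos = np.array([target.shape[1] / 2, target.shape[0] / 2, 1.0])
    second_pos = H @ third_pos   # second position, from which the operating
    second_pos /= second_pos[2]  # device's operation parameters are derived
    print(second_pos[:2])
```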

Prosecution Timeline

Jan 16, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12602810
TIRE-SIZE IDENTIFICATION METHOD, TIRE-SIZE IDENTIFICATION SYSTEM AND COMPUTER-READABLE STORAGE MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586208
APPARATUS AND METHOD FOR OPERATING A DENTAL APPLIANCE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12567248
A CROP SCANNING SYSTEM, PARTS THEREOF, AND ASSOCIATED METHODS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561937
METHOD, COMPUTER PROGRAM, PROFILE IDENTIFICATION DEVICE
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12537917
ADAPTATION OF THE RADIO CONNECTION BETWEEN A MOBILE DEVICE AND A BASE STATION
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+11.3%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 626 resolved cases by this examiner. Grant probability is derived from the career allow rate.
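
The projection arithmetic appears to be a simple additive model. Here is a minimal sketch, assuming (as the footnote suggests) that grant probability equals the career allow rate and that the with-interview figure adds the interview lift on top; these formulas are inferences from the displayed numbers, not a documented methodology.

```python
# Minimal sketch of the Prosecution Projections arithmetic, assuming an
# additive model inferred from the displayed numbers (not a documented
# methodology).

career_allow_rate = 510 / 626  # 81.5%, displayed as 82%
interview_lift = 0.113         # +11.3% from the examiner panel

grant_probability = career_allow_rate
with_interview = grant_probability + interview_lift  # naive additive model

print(f"Grant probability: {grant_probability:.1%}")  # 81.5%, displayed as 82%
print(f"With interview:    {with_interview:.1%}")     # 92.8%, displayed as 93%
```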
