Prosecution Insights
Last updated: April 19, 2026
Application No. 18/216,221

PLACEMENT LOCATION OBTAINING METHOD, MODEL TRAINING METHOD, AND RELATED DEVICE

Non-Final OA: §101, §102, §103
Filed
Jun 29, 2023
Examiner
GOYEA, OLUSEGUN
Art Unit
3627
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Huawei Technologies Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 65% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (above average): 465 granted / 712 resolved, +13.3% vs TC avg
Interview Lift: +33.5% (strong): comparing resolved cases with vs. without an interview
Typical Timeline: 3y 0m avg prosecution, 40 currently pending
Career History: 752 total applications across all art units
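The headline numbers above are simple functions of the raw counts; a minimal sketch of the arithmetic (the rounding conventions and the additive interview adjustment are assumptions inferred from the displayed figures):

```python
# Reproduce the examiner's headline statistics from the raw counts above.
granted = 465          # career grants
resolved = 712         # career resolved cases
interview_lift = 33.5  # percentage-point lift for cases with an interview

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # displayed rounded to 65%

# Assumed model: the lift adds directly to the base rate, capped at 100%.
with_interview = min(allow_rate + interview_lift, 100)
print(f"With interview: {with_interview:.0f}%")
```

Under these assumptions the output matches the displayed 65% base rate and 99% with-interview figure.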

Statute-Specific Performance

§101: 25.5% (-14.5% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 712 resolved cases.
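Each per-statute delta is the examiner's rate relative to the Tech Center average, so the averages can be backed out of the figures above (the sign convention, examiner rate minus delta equals TC average, is an assumption read off the labels):

```python
# Back out the implied Tech Center average for each statute:
# examiner_rate - delta_vs_tc = tc_average (assumed sign convention).
stats = {
    "101": (25.5, -14.5),
    "103": (43.3, +3.3),
    "102": (8.4, -31.6),
    "112": (16.1, -23.9),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Every implied average comes out to 40.0%, suggesting a single Tech-Center-wide baseline estimate rather than per-statute averages.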

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Election/Restrictions

Applicant's election with traverse of Invention I (claims 1-6, 10-13 and 18-20) in the reply filed on 09/26/2025 is acknowledged. The traversal is on the ground(s) that in view of the amendments claims 7-9 and 14-17 are drawn to methods and apparatus for inventory or stock management covered by G06Q 10/087. This is not found persuasive because claims 7-9 and 14-17 require “training a first machine learning model based on a first loss function until a convergence condition is met” before selecting a placement location for a target object. The requirement is still deemed proper and is therefore made FINAL.

Status of Claims

This non-final office action is responsive to Applicant’s submission filed 09/26/2025. Currently, claims 1-20 are pending. Claims 7 and 14-17 have been amended. No claims have been newly added or cancelled. Claims 7-9 and 14-17 have been withdrawn from further consideration.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an obtaining module”, “a generation module”, and “a selection module” in claim 10. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6, 10-13 and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., abstract idea) without significantly more. The claims recite a method and apparatus for inventory location placement.

Exemplary claim 10 recites in part, “…obtain first size information and N pieces of second size information, wherein the first size information indicates a size of an unoccupied area in an accommodation space, N is an integer greater than or equal to 1, and the N pieces of second size information indicate sizes of corresponding N first objects; (receiving first size and second size information)
generating M candidate placement locations based on the first size information and the N pieces of second size information, wherein one of the M candidate placement locations indicates one placement location of a target object in the unoccupied area, the target object is one object in the N first objects, and M is an integer greater than or equal to 1; (determining candidate placement locations based on received information)
generating, based on the first size information and the M candidate placement locations by using a first machine learning model, M first score values that are in a one-to-one correspondence with the M candidate placement locations; and (determining candidate placement location scores)
selecting a first placement location from the M candidate placement locations based on the M first score values. (selecting a placement location)

The above limitations describe the steps of: 1) acquiring data (first and second size information), 2) processing the received first and second size information (determining candidate placement locations and placement location scores), and 3) selecting an outcome (placement location). The above steps describe an inventory location placement process. The above limitations, under their broadest reasonable interpretation, encompass "Certain Methods of Organizing Human Activity" (managing personal behavior or relationships or interactions) enumerated in MPEP 2106.04(a)(2)(II)(C).
If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

The judicial exception is not integrated into a practical application. The claim recites additional elements in the form of one or more computing elements (obtaining module, generation module and selection module) for implementing the limitations encompassing the abstract idea identified above. The additional elements represent using a computer as a tool to perform the judicial exception as in MPEP 2106.05(f). When considered both individually and as a whole, the additional elements do not integrate the abstract idea into a practical application.

The recitation of additional elements is acknowledged as identified above. The discussion with respect to practical application is equally applicable to consideration of whether the additional elements amount to significantly more. The additional elements in the form of one or more computing elements (obtaining module, generation module and selection module) represent using a computer as a tool to perform the judicial exception as in MPEP 2106.05(f). Therefore, there are no meaningful recitations, considered in combination, that transform the judicial exception into a patent eligible application such that the claim amounts to significantly more than the judicial exception itself. Accordingly, claim 10 is directed to a judicial exception (i.e., abstract idea) without significantly more.

Claims 1 and 18-20 recite similar limitations as set forth in claim 10, and therefore are rejected based on similar rationale. Dependent claims 2-6 and 11-13 recite limitations directed to the abstract idea, and do not integrate the abstract idea into a practical application nor amount to significantly more.

Claims 10-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claimed controller is directed to “software per se or program per se” that does not fall within at least one of the four categories of patent eligible subject matter. Claim 10 recites a placement location obtaining apparatus comprising: “an obtaining module”, “a generation module”, and “a selection module”.

Applicant’s published specification teaches that “[a]n embodiment of this application further provides a training device. FIG. 14 is a schematic diagram of a structure of a training device according to an embodiment of this application. The model training apparatus 1300 described in the embodiment corresponding to FIG. 13 may be deployed on the training device 1400, and the training device 1400 is configured to implement functions of the training device in the embodiments corresponding to FIG. 3 to FIG. 9… In an embodiment, the memory 1432 is a random access memory (RAM), may directly exchange data with the central processing unit 1422, is configured to load the data 1444 and the application program 1442 and/or an operating system 1441 for the central processing unit 1422 for directly running and use, and is usually used as a temporary data storage medium of the operating system or another running program. A program stored in the storage medium 1430 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations for the training device. Further, the central processing unit 1422 may be configured to communicate with the storage medium 1430, and perform, on the training device 1400, the series of instruction operations in the storage medium 1430.” See paragraph 0173.

Applicant’s published specification fails to define whether the recited “an obtaining module”, “a generation module”, and “a selection module” are hardware or software. “Software per se” or “Program per se” does not fall within at least one of the four categories of patent eligible subject matter. Claims 11-13 are rejected based on their dependence, directly or indirectly, on claim 10.

Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 18 recites a “computer-readable storage medium comprising a program…” Applicant’s published specification teaches that “…FIG. 13 may be deployed on the training device 1400, and the training device 1400 is configured to implement functions of the training device in the embodiments corresponding to FIG. 3 to FIG. 9. For example, a great difference may be generated due to different configurations or performance of the training device 1400, and the training device 1400 may include one or more central processing units (CPUs) 1422 (for example, one or more processors), a memory 1432, and one or more storage media 1430 storing an application program 1442 or data 1444 (for example, one or more mass storage devices). The memory 1432 and the storage medium 1430 may be transient storage or persistent storage. In an embodiment, the memory 1432 is a random access memory (RAM), may directly exchange data with the central processing unit 1422, is configured to load the data 1444 and the application program 1442 and/or an operating system 1441 for the central processing unit 1422 for directly running and use, and is usually used as a temporary data storage medium of the operating system or another running program…” See paragraph 0173. Examiner notes that the claimed “computer-readable storage medium” may include transient-type media (e.g. carrier waves, signals), thus the “computer-readable storage medium” causes the claim as a whole to be drawn to non-statutory subject matter that would not fall within one of the four statutory categories of invention.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 20 recites “[a] computer program product, wherein the computer program product comprises instructions…” Applicant’s published specification teaches that “[t]he computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to embodiments of this application are all or partially generated… The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium...” See paragraph 0204. The claimed computer program product comprises only “instructions”. The instructions are interpreted as program/software code. “Software per se” or “Program per se” does not fall within at least one of the four categories of patent eligible subject matter.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 10, 11 and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Appl. Pub. No. 2020/0377315 (Diankov et al. – hereinafter Diankov).
Referring to claim 1, Diankov discloses a placement location obtaining method, comprising:

obtaining first size information and N pieces of second size information, wherein the first size information indicates a size of an unoccupied area in an accommodation space, N is an integer greater than or equal to 1, and the N pieces of second size information indicate sizes of corresponding N first objects; [See paragraphs 0045, 0054, 0057, 0058, 0120]

generating M candidate placement locations based on the first size information and the N pieces of second size information, wherein one of the M candidate placement locations indicates one placement location of a target object in the unoccupied area, the target object is one object in the N first objects, and M is an integer greater than or equal to 1; [See paragraphs 0059-0062, 0069-0073]

generating, based on the first size information and the M candidate placement locations by using a first machine learning model, M first score values that are in a one-to-one correspondence with the M candidate placement locations; and [See paragraphs 0054-0056, 0062, 0066, 0069-0073]

selecting a first placement location from the M candidate placement locations based on the M first score values. [See paragraphs 0066, 0108]

Referring to claim 2, Diankov discloses the method according to claim 1, wherein the first size information is a two-dimensional matrix; [See paragraphs 0061, 0062, Figs. 3A-3C] a quantity of rows of the two-dimensional matrix indicates a first size of a bottom surface of the accommodation space, a quantity of columns of the two-dimensional matrix indicates a second size of the bottom surface of the accommodation space, and if the first size is a length, the second size is a width, or if the first size is a width, the second size is a length; and [See paragraphs 0055, 0061, 0062, 0076, 0099, 0123, 0142, 0159, 0170, Figs. 3A-5B] the bottom surface of the accommodation space is divided into a plurality of first areas, there is no intersection between different first areas, the two-dimensional matrix comprises a plurality of matrix values that are in a one-to-one correspondence with the plurality of first areas, and each matrix value indicates a remaining space of one of the plurality of first areas in a height direction. [See paragraphs 0055, 0061, 0062, 0076, 0099, 0123, 0142, 0159, 0170, Figs. 3A-5B]

Referring to claims 10 and 11, they recite similar limitations as set forth in claims 1 and 2, and therefore are rejected based on similar rationale.

Referring to claim 18, it recites similar limitations as set forth in claim 1, and therefore is rejected based on similar rationale. Claim 18 discloses a computer-readable storage medium comprising a program to perform the method steps of claim 1.

Referring to claim 19, it recites similar limitations as set forth in claim 1, and therefore is rejected based on similar rationale. Claim 19 discloses a circuitry system to perform the method steps of claim 1.

Referring to claim 20, it recites similar limitations as set forth in claim 1, and therefore is rejected based on similar rationale. Claim 20 discloses a computer program product with instructions to perform the method steps of claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3-6, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Diankov as applied to claims 1 and 10 above, and further in view of U.S. Patent Appl. Pub. No. 2020/0104457 (Chuang et al. – hereinafter Chuang).

Referring to claim 3, Diankov discloses the method according to claim 1 above. Diankov further discloses the limitations: inputting the first size information and the M candidate placement locations into the first submodel, to obtain M second score values that are output by the first submodel and that are in a one-to-one correspondence with the M candidate placement locations; and [See Diankov paragraphs 0054-0056, 0062, 0066, 0069-0073] generating the M first score values based on the M second score values, the N third score values, and a first correspondence, wherein the first correspondence is a correspondence between the M second score values and the N third score values. [See Diankov paragraphs 0054-0056, 0062, 0066, 0069-0073]

Diankov does not explicitly disclose the limitations: wherein the first machine learning model comprises a first submodel and a second submodel, and the generating, based on the first size information and the M candidate placement locations by using a first machine learning model, M first score values that are in a one-to-one correspondence with the M candidate placement locations comprises: inputting N first volumes that are in a one-to-one correspondence with the N first objects into the second submodel, to obtain N third score values that are output by the second submodel and that are in a one-to-one correspondence with the N first objects, wherein one first object comprises at least one second object, and each of the N first volumes is any one of the following: an average volume of the at least one second object, a volume of a largest second object in the at least one second object, or a volume of a smallest second object in the at least one second object.

Chuang teaches a method with the limitations: wherein the first machine learning model comprises a first submodel and a second submodel, and the generating, based on the first size information and the M candidate placement locations by using a first machine learning model, M first score values that are in a one-to-one correspondence with the M candidate placement locations comprises: [See paragraphs 0066, 0067, 0094, 0108, 0124, 0127-0129] inputting N first volumes that are in a one-to-one correspondence with the N first objects into the second submodel, to obtain N third score values that are output by the second submodel and that are in a one-to-one correspondence with the N first objects, wherein one first object comprises at least one second object, and each of the N first volumes is any one of the following: an average volume of the at least one second object, a volume of a largest second object in the at least one second object, or a volume of a smallest second object in the at least one second object. [See paragraphs 0054, 0055, 0124, 0128, 0129]

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified the system executing the method of Diankov to have incorporated a machine learning model configuration as in Chuang with the motivation of optimizing the process of product placement within a defined space. [See Chuang paragraph 0025; Diankov paragraphs 0024-0030]

Referring to claim 4, the combination of Diankov and Chuang discloses the method according to claim 3, wherein the generating the M first score values based on the M second score values, the N third score values, and a first correspondence comprises: [See Diankov paragraphs 0054-0056, 0062, 0066, 0069-0073] obtaining, from the N third score values based on the first correspondence, at least one third score value corresponding to a target score value, wherein the target score value is any score value in the M second score values; and [See Diankov paragraphs 0054-0056, 0062, 0066, 0069-0073] adding each third score value in the at least one third score value corresponding to the target score value to the target score value, to obtain the first score value. [See Diankov paragraphs 0054-0056, 0062, 0066, 0069-0073]

Referring to claim 5, the combination of Diankov and Chuang discloses the method according to claim 3, wherein the inputting the first size information and the M candidate placement locations into the first submodel, to obtain M second score values that are output by the first submodel and that are in a one-to-one correspondence with the M candidate placement locations comprises: [See Diankov paragraphs 0062, 0066, 0099-0102, 0106] performing feature extraction on the first size information by using the first submodel, to obtain first feature information, and connecting the first feature information to each of the M candidate placement locations by using the first submodel, to generate the M second score values. [See Chuang paragraphs 0086, 0089-0091, 0097-0099]

Referring to claim 6, the combination of Diankov and Chuang discloses the method according to claim 3, wherein the first submodel is any one of the following neural networks: a deep Q network, a double deep Q network, a dueling double deep Q network, or a nature deep Q network, or the second submodel is a fully connected neural network. [See Chuang paragraph 0053]

Referring to claims 12 and 13, they recite similar limitations as set forth in claims 3 and 5, and therefore are rejected based on similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUSEGUN GOYEA whose telephone number is (571) 270-5402. The examiner can normally be reached M-F: 9am-5pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FAHD OBEID, can be reached at (571) 270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUSEGUN GOYEA/
Primary Examiner, Art Unit 3627

Prosecution Timeline

Jun 29, 2023: Application Filed
Aug 16, 2023: Response after Non-Final Action
Dec 27, 2025: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591867: SYSTEMS AND METHODS FOR THIRD PARTY PAYMENT AT POINT OF SALE TERMINALS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586087: COMPUTING TOOL RISK DISCOVERY
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579508: System and method for electronically determining correct product placement of items
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572991: CONGESTION INFORMATION DISPLAY SYSTEM, SENSOR DEVICE, CONGESTION INFORMATION DISPLAY METHOD, AND STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566089: STACKED UNIT DETECTION SYSTEM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview (+33.5%): 99%
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 712 resolved cases by this examiner. Grant probability derived from career allow rate.
