DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is a non-final First Office Action.
This action is in response to correspondence filed on 08/01/2022.
Claims 1-8 are pending and have been considered.
Claims 1-5 are interpreted under 35 U.S.C. 112(f).
Claims 1-5 are rejected under 35 U.S.C. 112(a) and 35 U.S.C. 112(b).
Claims 1-8 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter, a judicial exception, an abstract idea (mental process), without significantly more.
Claims 1, 2, 4, 5, and 6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 3, 4, and 5, respectively, of copending Application No. 17/878,482.
Claims 1-8 are rejected under 35 U.S.C. 103 as being obvious over Mirhoseini et al., "A graph placement methodology for chip design," Nature, Vol. 594, June 2021, in view of Ho et al., U.S. Patent No. 10,699,043.
Priority
The application claims priority to Republic of Korea Application No. KR10-2021-0124864, filed on 09/17/2021. The priority claim is acknowledged.
Information Disclosure Statement (IDS)
The information disclosure statements (IDS) submitted on 08/01/2022, 08/05/2022, and 06/09/2023 are in compliance with the provisions of 37 CFR 1.97.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: ‘a simulation engine configured to,’ ‘reinforcement learning agent configured to,’ and ‘a design data unit configured to’ in claim 1; and ‘configuration unit configured to’ and ‘simulation unit configured to’ in claim 4.
The 112(f) interpretation applies to dependent claims 2-5 as well.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1 and 4 are rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because the claims purport to invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, but fail to recite a combination of elements as required by that statutory provision and thus cannot rely on the specification to provide the structure, material, or acts to support the claimed function. The claims recite ‘a simulation engine configured to,’ ‘reinforcement learning agent configured to,’ and ‘a design data unit configured to’ in claim 1, and ‘configuration unit configured to’ and ‘simulation unit configured to’ in claim 4. There are no algorithms or description of how these functions are performed. As such, each claim recites a function that has no limits and covers every conceivable means for achieving the stated function, while the specification discloses at most only those means known to the inventor. Accordingly, the disclosure is not commensurate with the scope of the claims.
Dependent claims 2, 3 and 5 inherit the deficiencies of the claims from which they depend and are rejected under the same rationale.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 4 recite the limitations ‘a simulation engine configured to,’ ‘reinforcement learning agent configured to,’ and ‘a design data unit configured to’ (claim 1), and ‘configuration unit configured to’ and ‘simulation unit configured to’ (claim 4), which invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. There are no algorithms or description of how these functions are performed. Therefore, claims 1 and 4 are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Dependent claims 2, 3 and 5 inherit the deficiencies of the claims from which they depend and are rejected under the same rationale.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter: a judicial exception (an abstract idea, i.e., a mental process) without significantly more.
(S1) Claims 1-8 are each directed to a statutory category of invention: machines (claims 1-5, directed to an apparatus) and processes (claims 6-8, directed to a method).
(S2A1) The claims are analyzed to identify abstract ideas (highlighted in bold font) and additional elements. Paraphrasing is used to simplify referencing. Claims that have similar limitations, even if not verbatim identical, share the same rationale when analyzed as to whether they are directed to non-statutory subject matter, as follows.
Claim 1, recites a reinforcement learning apparatus for optimizing a position of an object based on design data, the apparatus comprising: (a) a simulation engine (110) configured to
(a1) analyze, based on design data comprising information about all objects, an individual object and position information of the object (analyzing can be performed in the mind or, as disclosed in the specification, is currently done by workers ‘by hand’; page 1, line 23. For example, a person can make an evaluation and a judgement about chess pieces in an opening of the game, e.g., analyze the current position of the Queen on the chessboard. This is a mental process; see MPEP 2106.04(a)(2) III),
(a2) generate simulation data constituting a reinforcement environment in which a predetermined constraint is configured for the analyzed individual object (setting the mental imagery that creates the context, a mental process; for example, a chessboard situation with the Queen in a certain corner position limits her moves, but also has nearby fields in which she could capture another piece or pawn in certain zones. This is a mental process; see MPEP 2106.04(a)(2) III),
(a3) request optimization information for placing a target object around at least one individual object (an additional element, data gathering, adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.04(d), 2106.05(g)),
(a4) perform simulation for the placement of the target object, based on state information comprising target object placement information used for reinforcement learning and an action provided from a reinforcement learning agent (120) (a mental visualization and judgement; for example, imagining the chessboard when placing a piece based on a plan, a calculated alternative, or a suggestion. This is a mental process; see MPEP 2106.04(a)(2) III), and
(a5) provide reward information according to the simulation result as feedback on decision-making of the reinforcement learning agent (120) (‘Apply it’ – Mere Instructions To Apply An Exception; MPEP 2106.05(f));
(b) the reinforcement learning agent (120) configured to perform reinforcement learning based on the state information and the reward information provided from the simulation engine (110) to determine an action such that the placement of the target object around the object is optimized (making evaluations, judgements, and decisions in determining an action that leads to an improvement of a situation; for example, in the context of a chess game, reasoning about and determining a move that creates a material advantage (capturing a piece) or a strategic advantage (e.g., occupying the center). This can be performed in the mind, or with a helping tool such as moving pieces on a chess board to explore the outcome of various move scenarios. This is a mental process; see MPEP 2106.04(a)(2) III); and
(c) a design data unit (130) configured to provide, to the simulation engine (110), the design data comprising the information about all objects (‘Apply it’ – Mere Instructions To Apply An Exception; MPEP 2106.05(f)).
Claim 1 includes limitations that recite mental processes, paraphrased here as “analyze an object and its positional information” (a1), “generate a contextual environment for analysis” (a2), “determine the result of placing a target object in a calculated superior position in the contextual environment” (a4), and “calculate a superior position for placement based on qualitative knowledge of placement outcomes” (b). Under the broadest reasonable interpretation, and in view of the guidance of MPEP 2106.04 II.B, the limitations are considered together as a single abstract idea for further analysis: a process aimed at “optimizing the placement of an object for a specific context and constraints.” Nothing prevents the process from being performed in the mind, by hand, or with the use of a tool.
Regarding the claim elements “simulation engine,” “reinforcement learning agent,” and “design data unit,” which are executed in software on a general computer: the courts do not distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer (Mental Processes, i.e., the Concepts Performed in the Human Mind grouping of abstract ideas; see MPEP 2106.04(a)(2) III).
Accordingly, claim 1 recites an abstract idea.
(S2A2) The additional elements, individually and in combination, fail to integrate the recited judicial exception into a practical application when evaluated using the considerations in MPEP §§ 2106.04(d), 2106.05(a)-(c), (e)-(h), because they do not impose any meaningful limits on practicing the abstract idea.
The additional claim elements are data gathering (a3) and ‘apply it’ (c). When considered individually or in combination, they are not sufficient to integrate the abstract idea into a practical application.
(S2B) Claim 1 does not include additional elements which, individually or in combination, amount to significantly more than the judicial exception. As discussed above in S2A2, the additional elements recite data gathering, an insignificant extra-solution activity which, as recited here at a high level of generality, was found by the courts to be well-understood, routine, and conventional (WURC) (see MPEP § 2106.05(d)(II)).
When considered as a whole, the additional elements merely elaborate on the identified abstract idea. They do not practically or significantly alter how the identified abstract idea would be performed. There is no inventive concept beyond the judicial exception, and thus the claim as a whole does not amount to significantly more than the judicial exception itself.
Therefore, it is concluded that claim 1 is ineligible.
Claim 6, though not verbatim, recites essentially similar limitations, of which the mental processes are highlighted in bold:
a) analyzing, by a simulation engine (110), an individual object and position information of the object when design data comprising information about all objects is uploaded, and generating simulation data constituting a reinforcement environment in which a predetermined constraint is configured for the individual object;
b) when an optimization request for placement of a target object around an individual object based on the simulation data is received from the simulation engine (110), performing, by a reinforcement learning agent (120), reinforcement learning based on reward information and state information comprising target object placement information, which is collected from the simulation engine (110) and used for the reinforcement learning, to determine an action such that the placement of the target object is optimized; and
c) performing, by the simulation engine (110), simulation for configuring a reinforcement environment for the placement of the target object, based on an action provided from the reinforcement learning agent (120), and providing, to the reinforcement learning agent (120), reward information according to the result of performing the simulation as feedback on decision-making of the reinforcement learning agent (120) and the state information comprising the target object placement information used for reinforcement learning,
wherein the reward information in operation c) is calculated based on a distance between an object and a target object or a position of the target object.
Limitation (a) is similar to the combination of limitations (a1) and (a2) in claim 1 and recites mental processes.
The bolded elements of limitation (b) are similar to limitation (b) in claim 1 and recite a mental process.
The bolded elements of limitation (c) are similar to limitation (a4) in claim 1.
Regarding the claim elements ‘by a/the simulation engine’: these are similar to the recitation ‘by a computer,’ and the courts do not distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer (Mental Processes, i.e., the Concepts Performed in the Human Mind grouping of abstract ideas; see MPEP 2106.04(a)(2) III).
Claim 6 recites the same mental process of “optimizing the placement of an object for a specific context and constraints.” Nothing prevents the process from being performed in the mind, using pen and paper, or with the use of a tool. The use of a simulation engine, computer, or similar aid for implementation of any unit that executes the process does not prevent the process from being a mental process. Thus, the claim recites an abstract idea.
When analyzed under Step 2A, Prong Two, and Step 2B, the reasoning is the same as for claim 1. When analyzed individually and in combination, the additional elements of the claim amount simply to data gathering and ‘apply it,’ and thus fail to integrate the abstract idea into a practical application. When analyzed individually and in combination, and analyzing the claim as a whole, the additional limitations are similar to limitations considered by the courts to be WURC, do not provide an inventive concept outside of the judicial exception, and thus fail to provide significantly more.
Claim 6 is thus found to be directed to a judicial exception (abstract idea – mental process) without significantly more. Claim 6 is not eligible under 35 USC 101.
Claim 4 recites further limitations, analyzed as follows:
wherein the simulation engine (110) comprises:
a reinforcement learning environment configuration unit (111) configured to
analyze, based on design data comprising information about all objects, an individual object and position information of the object (a process of evaluation and judgement; for example, similar to a chessboard analysis, it can be performed in the mind or with the assistance of pen and paper. This is a mental process),
generate a predetermined constraint and simulation data constituting a reinforcement environment for the individual object (a process of evaluation, judgement, and decision-making; for example, a person can determine certain rules of the chess game, chessboard cases for analysis and study, the position/placement of pieces, etc. These can all be performed in the mind or with the assistance of pen and paper. This is a mental process),
and make, based on the simulation data, a request to the reinforcement learning agent (120) for optimization information for placing a target object around at least one individual object (‘Apply It’); and
a simulation unit (112) configured to perform, based on an action received from the reinforcement learning agent, simulation for configuring a reinforcement learning environment for the placement of the target object (a process of evaluation and judgement; for example, similar to a chess game, given an intent to move a piece to a certain location, one can mentally visualize the context, the move, and its consequences, including how the chessboard will look after placement. This can be performed in the mind or with the assistance of pen and paper and is a mental process),
and provide, to the reinforcement learning agent (120), reward information and state information comprising target object placement information used for reinforcement learning (‘Apply It’).
The bolded claim elements recite mental processes, which reinforce the abstract idea in the independent claim. The claim recites an abstract idea, and the additional elements in the claim are limitations of data manipulation and ‘apply it,’ which are insignificant extra elements that do not integrate the claim into a practical application. The claim is thus directed to a judicial exception.
These additional elements were found by the courts to be WURC; individually, in combination, and considering the claim as a whole, they fail to provide ‘significantly more.’
Claim 4 is thus found directed to a judicial exception, without significantly more, which makes it ineligible under 35 USC 101.
Claim 5 recites wherein the reward information is calculated based on a distance between an object and a target object or a position of the target object. The claim recites a mental process (which involves a mathematical concept); for example, a person could calculate in the mind a reward based on distance. There are no additional claim elements to integrate the exception into a practical application or to provide significantly more. Claim 5 is thus found directed to a judicial exception, without significantly more, which makes it ineligible under 35 USC 101.
Claim 8 recites further comprising converting the simulation data in operation a) into an extensible markup language (XML) file such that the simulation data is used through a web. The limitation is mere instructions to apply an exception. It cannot significantly alter the judicial exception. It does not integrate the judicial exception into a practical application, nor does it provide an inventive concept. Claim 8 is thus found ineligible under 35 USC 101.
Claim 2 (representative of claim 7, which has similar limitations) and claim 3 recite:
(Claims 2, 7): wherein the design data is semiconductor design data comprising CAD data or netlist data.
(Claim 3): wherein an application program visualized through a web is additionally installed in the simulation engine (110).
These further elements in the dependent claims do not perform any claimed method steps. They do not practically or significantly alter how the identified abstract idea would be performed and do not provide more than a general link to a technological environment. They describe the nature, structure, and/or content of other claim elements (the design data and the simulation engine) and, as such, cannot change the nature of the identified abstract idea from a judicial exception into eligible subject matter, because they do not represent significantly more (see MPEP 2106.07).
Therefore, claims 2, 3, 7 are deemed ineligible under 35 USC 101.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 2, 4, 5, and 6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 3, 4, and 5, respectively, of copending Application No. 17/878,482. Although the claims at issue are not identical, they are not patentably distinct from each other because the differences do not impart patentable distinctness. For example, the additional limitations of claim 1 of the copending application, e.g., adding ‘a color’ or ‘from a user terminal,’ merely represent a narrowing/obvious variation of the broader limitations recited in claim 1 of the instant application. Because the broader claim already encompasses the narrower species under BRI, the differences do not impart patentable distinctness.
The claims are presented for comparison below.
1. A reinforcement learning apparatus for optimizing a position of an object based on design data, the apparatus comprising:
a simulation engine (110) configured to
analyze, based on design data comprising information about all objects, an individual object and position information of the object,
generate simulation data constituting a reinforcement environment in which a predetermined constraint is configured for the analyzed individual object,
request optimization information for placing a target object around at least one individual object,
perform simulation for the placement of the target object, based on state information comprising target object placement information used for reinforcement learning and an action provided from a reinforcement learning agent (120), and provide reward information according to the simulation result as feedback on decision-making of the reinforcement learning agent (120); the reinforcement learning agent (120) configured to perform reinforcement learning based on the state information and the reward information provided from the simulation engine (110) to determine an action such that the placement of the target object around the object is optimized; and a design data unit (130) configured to provide, to the simulation engine (110), the design data comprising the information about the all objects.
1. A user learning environment-based reinforcement learning apparatus, the apparatus comprising: a simulation engine (210) configured to set a customized reinforcement learning environment by analyzing, based on design data including entire object information, an individual object and location information of the object, and adding a color, a constraint, and location change information to the analyzed object for each object based on setting information input from a user terminal (UT) (100), to perform reinforcement learning based on the customized reinforcement learning environment, to provide state information of the customized reinforcement learning environment and reward information associated with a simulated disposition of a target object as a feedback to a decision made by a reinforcement learning agent (220), wherein simulation is performed based on an action determined so that the disposition of the target object around at least one individual object is optimized; and the reinforcement learning agent (220) configured to determine an action so that a disposition of a target object to be disposed around the object is optimized by performing reinforcement learning based on the state information and the reward information provided from the simulation engine (210).
2. The apparatus of claim 1, wherein the design data is semiconductor design data comprising CAD data or netlist data.
2. The apparatus of claim 1, wherein the design data is semiconductor design data including CAD data or netlist data.
4. The apparatus of claim 1,
wherein the simulation engine (110) comprises: a reinforcement learning environment configuration unit (111) configured to analyze, based on design data comprising information about all objects, an individual object and position information of the object,
generate a predetermined constraint and simulation data constituting a reinforcement environment for the individual object, and make, based on the simulation data, a request to the reinforcement learning agent (120) for optimization information for placing a target object around at least one individual object; and
a simulation unit (112) configured to
perform, based on an action received from the reinforcement learning agent, simulation for configuring a reinforcement learning environment for the placement of the target object, and
provide, to the reinforcement learning agent (120), reward information and state information comprising target object placement information used for reinforcement learning.
3. The apparatus of claim 1, wherein the simulation engine (210) comprises: an environment setting unit (211) configured to set a customized reinforcement learning environment by adding a color, a constraint, and location change information for each object based on setting information input from the UT (100); a reinforcement learning environment configuration unit (212) configured to produce simulation data for configuring a customized reinforcement learning environment by analyzing, based on the design data including the entire object information, an individual object and location information of the object, and adding a color, a constraint, and location change information which is set by the environment setting unit (211) for each individual object, and to request, from the reinforcement learning agent (220) based on the simulation data, optimization information for a disposition of a target object around at least one individual object; and a simulation unit (213) configured to perform simulation that configures a reinforcement learning environment associated with a disposition of a target object based on an action received from the reinforcement agent (220), and to provide state information that includes disposition information of a target object to be used for reinforcement learning and reward information to the reinforcement learning agent (220).
5. The apparatus of claim 4, wherein the reward information is calculated based on a distance between an object and a target object or a position of the target object.
4. The apparatus of claim 3, wherein the reward information is calculated based on a distance between an object and a target object or the location of the target object.
6. A reinforcement learning method for optimizing a position of an object based on design data, the method comprising:
a) analyzing, by a simulation engine (110), an individual object and position information of the object when design data comprising information about all objects is uploaded, and
generating simulation data constituting a reinforcement environment in which a predetermined constraint is configured for the individual object;
b) when an optimization request for placement of a target object around an individual object based on the simulation data is received from the simulation engine (110), performing, by a reinforcement learning agent (120), reinforcement learning based on reward information and state information comprising target object placement information, which is collected from the simulation engine (110) and used for the reinforcement learning, to determine an action such that the placement of the target object is optimized; and
c) performing, by the simulation engine (110), simulation for
configuring a reinforcement environment for the placement of the target object, based on an action provided from the reinforcement learning agent (120), and
providing, to the reinforcement learning agent (120), reward information according to the result of performing the simulation as feedback on decision-making of the reinforcement learning agent (120) and the state information comprising the target object placement information used for reinforcement learning,
wherein the reward information in operation c) is calculated based on a distance between an object and a target object or a position of the target object.
5. A reinforcement learning method comprising: a) a reinforcement learning server (200) receives design data including entire object information from a user terminal (UT) (100); b) the reinforcement learning server (200) sets a customized reinforcement learning environment by analyzing an individual object and location
information of the object, and adding a color, a constraint, and location change information to the analyzed object for each object based on setting information input from the UT (100); c) the reinforcement learning server (200) performs reinforcement learning based on state information of the customized reinforcement learning environment that includes disposition information of a target object to be used for reinforcement learning by a reinforcement learning agent, and reward information, so as to determine an action so that a disposition of a target object around at least one individual object is optimized; and d) the reinforcement learning server (200) performs, based on the action, simulation that configures a reinforcement learning environment associated with a disposition of the target object, and produces reward information based on a result of the performed simulation as a feedback to a decision made by the reinforcement learning agent, wherein the reward information in d) is calculated based on a distance between an object and the target object or a location of the target object.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103(a) are summarized as follows:
i. Determining the scope and contents of the prior art.
ii. Ascertaining the differences between the prior art and the claims at issue.
iii. Resolving the level of ordinary skill in the pertinent art.
iv. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 are rejected under 35 U.S.C. 103 as being obvious over Mirhoseini et al., A graph placement methodology for chip design, Nature, Vol. 594, June 2021, hereinafter MIR, in view of Ho et al., US Patent No. 10,699,043, hereinafter HO.
It is worth mentioning that both references present the work of the same team at Google LLC. MIR acknowledges US Patent 10,699,043 in the article. The more recent publication in Nature (MIR) has novel aspects; however, due to the type of publication, the patent goes into more detail on other aspects that are irrelevant for a Nature paper focused on advancements (e.g., the patent covers use of the system over the internet/web, which has been a common aspect of all electronic design automation (EDA) tools for decades).
Claim pairs 1 and 6, 2 and 7, and 3 and 8, though not verbatim, share essentially similar limitations and are analyzed together. The analysis is done with the recitation of the apparatus claims.
Regarding Claim 1, MIR discloses a reinforcement learning apparatus for optimizing a position of an object based on design data, the apparatus comprising {[p. 208, left col, middle] We then use this architecture as the encoder of our policy and value networks to enable transfer learning. In our experiments, we show that, as our agent is exposed to a greater volume and variety of chips, it becomes both faster and better at generating optimized placements for new chip blocks.}
a simulation engine (110) configured to analyze, based on design data comprising information about all objects, an individual object and position information of the object, generate simulation data constituting a reinforcement environment in which a predetermined constraint is configured for the analyzed individual object, request optimization information for placing a target object around at least one individual object, perform simulation for the placement of the target object, based on state information comprising target object placement information used for reinforcement learning and an action provided from a reinforcement learning agent (120), and provide reward information according to the simulation result as feedback on decision-making of the reinforcement learning agent (120); {[p. 207, right col, middle] … commercial electronic design automation (EDA) tools … waiting up to 72 h for EDA tools to evaluate that placement. (The simulation engine reads on the EDA tools that perform simulations.) [Detailed Methodology] Our goal is to minimize PPA, subject to constraints on routing congestion and density. (The predetermined constraints.) Our true reward is the output of a commercial EDA tool, including wirelength, routing congestion, density, power, timing and area. (The output of the EDA tool reads on the simulation data constituting the reinforcement environment; reward reads on reward.)}
the reinforcement learning agent (120) configured to perform reinforcement learning based on the state information and the reward information provided from the simulation engine (110) to determine an action such that the placement of the target object around the object is optimized; and { [First sentence] In this work, we propose a new graph placement method based on reinforcement learning (RL),
[media_image1.png: MIR Fig. 1, reproduced in greyscale]
Fig. 1 | Overview of our method and training regimen. In each training iteration, the RL agent places macros one at a time (actions, states and rewards are denoted by ai, si and ri, respectively). Once all macros are placed, the standard cells are placed using a force-directed method. The intermediate rewards are zero. The reward at the end of each iteration is calculated as a linear combination of the approximate wirelength, congestion and density, and is provided as feedback to the agent to optimize its parameters for the next iteration.} (In this art, force-directed methods consider the placement of a component based on nearby components which, by analogy with a force field, 'push it' away to avoid congestion.)
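For illustration only (not part of the record): the force-directed placement referenced in MIR's Fig. 1 caption can be sketched as below. This is a sketch of the general technique, not MIR's exact formulation; the function name, force constants, and net representation are assumptions.

```python
# One force-directed update: connected cells attract in proportion to their
# separation (spring-like pull along each net), while all cell pairs repel at
# short range, pushing components apart to limit congestion.

def force_directed_step(pos, nets, k_attract=0.1, k_repel=0.5, min_d=1e-3):
    """One update of cell positions; pos is {cell: [x, y]}, nets is [(a, b), ...]."""
    forces = {c: [0.0, 0.0] for c in pos}
    # Attraction along each net.
    for a, b in nets:
        dx = pos[b][0] - pos[a][0]
        dy = pos[b][1] - pos[a][1]
        forces[a][0] += k_attract * dx; forces[a][1] += k_attract * dy
        forces[b][0] -= k_attract * dx; forces[b][1] -= k_attract * dy
    # Repulsion between every pair of cells (stronger at short range).
    cells = list(pos)
    for i, a in enumerate(cells):
        for b in cells[i + 1:]:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            d2 = max(dx * dx + dy * dy, min_d)
            fx, fy = k_repel * dx / d2, k_repel * dy / d2
            forces[a][0] -= fx; forces[a][1] -= fy
            forces[b][0] += fx; forces[b][1] += fy
    for c in pos:
        pos[c][0] += forces[c][0]
        pos[c][1] += forces[c][1]
    return pos
```

Iterating this step pulls connected components together while keeping unconnected components spread out; in practice the step size and constants would be tuned, which this sketch does not attempt.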
MIR also discloses the design data with information about the objects; for example, {[p. 208, right col, middle] The dataset of netlists of size K is denoted by G, with each individual netlist in the dataset written as g; [p. 207, second paragraph] A computer chip is divided into dozens of blocks, each of which is an individual module, such as a memory subsystem, compute unit or control logic system. These blocks can be described by a netlist, a hypergraph of circuit components, such as macros (memory components) and standard cells (logic gates such as NAND, NOR and XOR), all of which are connected by wires. Chip floorplanning involves placing netlists onto chip canvases (two-dimensional grids) so that performance metrics (for example, power consumption, timing, area and wirelength) are optimized, while adhering to hard constraints on density and routing congestion.}
MIR does not explicitly disclose a design data unit. However, HO discloses:
a design data unit (130) configured to provide, to the simulation engine (110), the design data comprising the information about the all objects. {(6) FIG. 1 shows an example floorplan generation system 100. The floorplan generation system 100 is an example of a system implemented as computer programs on one or more computers in one or more locations in which the systems, components, and techniques described below are implemented. (7) The system 100 receives netlist data 102 for a computer chip.} Block 102 is interpreted as a design data unit.
[media_image2.png: HO FIG. 1, floorplan generation system 100, reproduced in greyscale]
It would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include the elements of HO. One would have been motivated to do so to obtain the advantage of a simple representation for explanatory purposes and of partitioning a software development into separate design units. A POSITA reading MIR would also be motivated to learn and apply HO, as MIR cites HO in the notes at the end of the article, reproduced below.
[media_image3.png: note from MIR citing HO, reproduced in greyscale]
At a high level, MIR and HO describe essentially the same technology: the application of reinforcement learning to placement/floorplanning. Combining their features would be reasonable, according to one of ordinary skill in the art. Moreover, since the elements disclosed by MIR and HO would function in the same manner in combination as they do in their separate embodiments, it would be reasonable to conclude that the results of the combination would be predictable. Accordingly, the claimed subject matter would have been obvious over MIR in view of HO.
Re claims 2 and 7, which share similar limitations: MIR/HO discloses the limitations of claim 1. MIR further discloses:
wherein the design data is semiconductor design data comprising CAD data or netlist data. {[p. 208, right col, middle] The dataset of netlists of size K is denoted by G, with each individual netlist in the dataset written as g; [p. 207, second paragraph] A computer chip is divided into dozens of blocks, each of which is an individual module, such as a memory subsystem, compute unit or control logic system. These blocks can be described by a netlist, a hypergraph of circuit components, such as macros (memory components) and standard cells (logic gates such as NAND, NOR and XOR), all of which are connected by wires.}
Re claim 3: MIR/HO discloses the limitations of claim 1. MIR does not disclose, but HO discloses:
wherein an application program visualized through a web is additionally installed in the simulation engine (110). {see at least (94) In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser. }
In addition, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of MIR and HO. One would have been motivated to do so to obtain the advantage of running computationally intensive applications on powerful machines to which a user would connect, since placement operations are notoriously computationally intensive. In fact, the industry (Cadence, Mentor Graphics, etc.) has been doing exactly that for a quarter of a century. In some conditions the motivation is not only the need for a powerful machine, but also access to various proprietary models and ultimately to proprietary simulation algorithms, which are better protected on a server at the vendor's site than loaded onto a customer's computer. Combining their features would be reasonable, according to one of ordinary skill in the art. Moreover, since the elements disclosed by MIR and HO would function in the same manner in combination as they do in their separate embodiments, it would be reasonable to conclude that the results of the combination would be predictable. Accordingly, the claimed subject matter would have been obvious over MIR in view of HO.
Re claim 8: MIR/HO discloses the limitations of claim 6. MIR does not disclose, but HO discloses:
further comprising converting the simulation data in operation a) into an extensible markup language (XML) file such that the simulation data is used through a web. {See at least (86) Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, (88) A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages . A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language}
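For illustration only (not part of the record): the recited conversion of simulation data into an XML file for web use can be sketched as below using the standard library. The element and attribute names (`simulation`, `object`, `constraint`, etc.) are assumptions chosen for the sketch, not taken from the claims or HO.

```python
import xml.etree.ElementTree as ET

def simulation_data_to_xml(objects, constraints):
    """Serialize object placements and constraints to an XML string
    suitable for transfer to a web front end."""
    root = ET.Element("simulation")
    objs = ET.SubElement(root, "objects")
    for name, (x, y) in objects.items():
        ET.SubElement(objs, "object", name=name, x=str(x), y=str(y))
    cons = ET.SubElement(root, "constraints")
    for key, value in constraints.items():
        ET.SubElement(cons, "constraint", key=key, value=str(value))
    return ET.tostring(root, encoding="unicode")

# Example: one placed object and one density constraint.
xml_text = simulation_data_to_xml({"obj_a": (0, 0)}, {"max_density": 0.6})
```

The resulting string can be parsed back with `ET.fromstring` on the receiving side, which is what makes the XML form convenient for use through a web interface.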
In addition, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of MIR and HO. One would have been motivated to do so to obtain the advantage of running computationally intensive applications on powerful machines to which a user would connect, since placement operations are notoriously computationally intensive. In fact, the industry (Cadence, Mentor Graphics, etc.) has been doing exactly that for a quarter of a century. In order for the simulation data to run over the