Prosecution Insights
Last updated: April 19, 2026
Application No. 18/789,923

CONTROL SYSTEM, CONTROL METHOD, AND STORAGE MEDIUM

Non-Final OA: §101, §103, §112

Filed: Jul 31, 2024
Examiner: ALSOMAIRY, IBRAHIM ABDOALATIF
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)

Grant Probability: 40% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 2m
Grant Probability with Interview: 49%

Examiner Intelligence

Career Allow Rate: 40% (grants 40% of resolved cases; 33 granted of 82 resolved; -11.8% vs TC avg)
Interview Lift: +8.4% on resolved cases with an interview (moderate lift)
Avg Prosecution (typical timeline): 3y 2m
Currently Pending: 43 applications
Total Applications (career history): 125, across all art units

Statute-Specific Performance

§101: 14.7% (-25.3% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center averages are estimates. Figures are based on career data from 82 resolved cases.
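
The headline figures above combine by simple arithmetic. A minimal sketch, assuming an additive interview adjustment (the report's actual model is not disclosed; all input figures come from the report itself):

```python
# Illustrative arithmetic only; the report's underlying model is not disclosed.

granted, resolved = 33, 82          # examiner's career counts (from the report)
allow_rate = granted / resolved     # 0.402... -> displayed as 40%
print(f"Career allow rate: {allow_rate:.1%}")

base_grant_prob = 0.40              # application-level estimate (from the report)
interview_lift = 0.084              # +8.4% lift on resolved cases with interview

# Assumed additive adjustment (hypothetical): 0.40 + 0.084 = 0.484,
# roughly consistent with the 49% "with interview" figure shown above.
with_interview = base_grant_prob + interview_lift
print(f"Estimated grant probability with interview: {with_interview:.1%}")
```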

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Non-Final Action on the Merits. Claims 1-9 are currently pending and are addressed below.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on July 31, 2024, and December 4, 2024, have been considered and entered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“a conversion information storing unit configured to convert” in at least claim 1
“an association unit configured to associate” in at least claim 1

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The published specification provides corresponding structure for the conversion information storing unit in at least paragraph 57. The published specification fails to provide a corresponding structure for the association unit.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1 and 8 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under section 112(a). See MPEP § 2163.03, subsection VI.” See MPEP § 2181, subsection II.B. Claims 2-7 are rejected due to their dependence on rejected independent claim 1.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim limitation “an association unit” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification fails to provide sufficient disclosure for the corresponding limitation. If the specification fails to disclose sufficient corresponding structure, materials, or acts that perform the entire claimed function, then the claim limitation is indefinite because the applicant has in effect failed to particularly point out and distinctly claim the invention as required by 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. In re Donaldson Co., 16 F.3d 1189, 1195, 29 USPQ2d 1845, 1850 (Fed. Cir. 1994) (en banc). See MPEP § 2163.03, subsection VI. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. In sum, claims 1-9 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception to patentability (i.e., a law of nature, a natural phenomenon, or an abstract idea) and does not include an inventive concept that is something “significantly more” than the judicial exception under the January 2019 patentable subject matter eligibility guidance (2019 PEG) analysis which follows.

Under the 2019 PEG step 1 analysis, it must first be determined whether the claims are directed to one of the four statutory categories of invention (i.e., process, machine, manufacture, or composition of matter). Applying step 1 of the analysis for patentable subject matter to the claims, it is determined that the claims are directed to the statutory category of a process. Therefore, we proceed to step 2A, Prong 1.

Revised Guidance Step 2A – Prong 1

Under the 2019 PEG step 2A, Prong 1 analysis, it must be determined whether the claims recite an abstract idea that falls within one or more designated categories of patent ineligible subject matter (i.e., organizing human activity, mathematical concepts, and mental processes) that amount to a judicial exception to patentability. Here, with respect to independent claims 1 and 8-9, the claims recite the abstract idea of generating route information for a mobile object, e.g., the limitation “wherein the control unit generates route information relating to a movement route of the mobile object on the basis of the space information acquired from the conversion information storing unit and the type information of the mobile object,” which could be determined mentally. These claims fall within one or more of the three enumerated 2019 PEG categories of patent ineligible subject matter, specifically, a mental process that can be performed in the human mind, since each of the above steps could alternatively be performed in the human mind or with the aid of pen and paper. This conclusion follows from CyberSource Corp. v.
Retail Decisions, Inc., where our reviewing court held that section 101 did not embrace a process defined simply as using a computer to perform a series of mental steps that people, aware of each step, can and regularly do perform in their heads. 654 F.3d 1366, 1373 (Fed. Cir. 2011); see also In re Grams, 888 F.2d 835, 840–41 (Fed. Cir. 1989); In re Meyer, 688 F.2d 789, 794–95 (CCPA 1982); Elec. Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (“we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category”). Additionally, mental processes remain unpatentable even when automated to reduce the burden on the user of what once could have been done with pen and paper. See CyberSource, 654 F.3d at 1375 (“That purely mental processes can be unpatentable, even when performed by a computer, was precisely the holding of the Supreme Court in Gottschalk v. Benson.”).

These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. For example, the claim limitation encompasses mentally generating route information for a mobile object based on the information provided by the car’s sensors while traveling, or alternatively, mentally generating route information for a mobile object based on observations by a human. For example, a human could mentally, and with the aid of pen and paper, generate route information for a mobile object.

Revised Guidance Step 2A – Prong 2

Under the 2019 PEG step 2A, Prong 2 analysis, the identified abstract idea to which the claim is directed does not include limitations that integrate the abstract idea into a practical application, since the additional elements of a control unit, a conversion information storing unit, and an association unit are merely generic components used as a tool (“apply it”) to implement the abstract idea. (See, e.g., MPEP § 2106.05(f)). See Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”).

In addition, the limitation “a conversion information storing unit configured to convert space information including information relating to a type of object present in a space defined using a first reference system and information relating to a time into a format in association with a unique identifier and storing the format, wherein the conversion information storing unit is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control system further comprising an association unit configured to associate the second reference system with the first reference system” constitutes insignificant pre-solution activity that merely gathers data and, therefore, does not integrate the exception into a practical application. See In re Bilski, 545 F.3d 943, 963 (Fed. Cir. 2008) (en banc), aff’d on other grounds, 561 U.S.
593 (2010) (characterizing data gathering steps as insignificant extra-solution activity); see also CyberSource, 654 F.3d at 1371–72 (noting that even if some physical steps are required to obtain information from a database (e.g., entering a query via a keyboard, clicking a mouse), such data-gathering steps cannot alone confer patentability); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accord Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(g)). In addition, merely “[u]sing a computer to accelerate an ineligible mental process does not make that process patent-eligible.” Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1279 (Fed. Cir. 2012); see also CLS Bank Int’l v. Alice Corp. Pty. Ltd., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc) (“simply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.”), aff’d, 573 U.S. 208 (2014). Accordingly, the recited additional elements do not transform the abstract idea into a practical application of the abstract idea.

Revised Guidance Step 2B

Under the 2019 PEG step 2B analysis, the additional elements are evaluated to determine whether they amount to something “significantly more” than the recited abstract idea (i.e., an inventive concept). Here, the additional elements, such as a control unit, a conversion information storing unit, and an association unit, do not amount to an inventive concept since, as stated above in the step 2A, Prong 2 analysis, the claims are simply using the additional elements as a tool to carry out the abstract idea (i.e., “apply it”) on a computer or computing device and/or via software programming. (See, e.g., MPEP § 2106.05(f)). The additional elements are specified at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. (See, e.g., MPEP § 2106.05 I.A.). See Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). Thus, these elements, taken individually or together, do not amount to “significantly more” than the abstract ideas themselves.

The additional elements of the dependent claims 2-7 merely refine and further limit the abstract idea of the independent claims and do not add any feature that is an “inventive concept” which cures the deficiencies of their respective parent claim under the 2019 PEG analysis. None of the dependent claims considered individually, including their respective limitations, includes an “inventive concept” of some additional element or combination of elements sufficient to ensure that the claims in practice amount to something “significantly more” than the patent-ineligible subject matter to which the claims are directed. The elements of the instant claimed invention, when taken in combination, do not offer substantially more than the sum of the functions of the elements when each is taken alone.
The claims as a whole do not amount to significantly more than the abstract idea itself because the claims do not effect an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of an electronic device itself which implements the abstract idea (e.g., the general purpose computer and/or the computer system which implements the process are not made more efficient or technologically improved); the claims do not perform a transformation or reduction of a particular article to a different state or thing (i.e., the claims do not use the abstract idea in the claimed process to bring about a physical change; see, e.g., Diamond v. Diehr, 450 U.S. 175 (1981), where a physical change, and thus patentability, was imparted by the claimed process; contrast Parker v. Flook, 437 U.S. 584 (1978), where a physical change, and thus patentability, was not imparted by the claimed process); and the claims do not move beyond a general link of the use of the abstract idea to a particular technological environment. Accordingly, claims 1-9 are rejected under 35 USC 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Nakai (US 20210397191 A1) (“Nakai”) in view of Lasang (US 20210084312 A1) (“Lasang”).

With respect to claim 1, Nakai teaches a control system comprising: a control unit configured to give a control instruction to at least one or more mobile object; and a conversion information storing unit configured to convert space information including information relating to a type of object present in a space defined using a first reference system and information relating to a time into a format in association with a unique identifier and storing the format (See at least Nakai FIGS. 2-4 and Paragraphs 98-105 “First, the moving space is scanned by the moving body 10 on the basis of the map information stored in advance (step 101).
Then, path information in which a traveling path and a time stamp are linked to each other is generated. Note that the traveling path corresponds to information of a passing position, and the time stamp corresponds to information of a passing time. This processing corresponds to “SCAN INNER PORTION OF MOVING SPACE IN ADVANCE” illustrated in FIG. 3 … The security camera system 137 generates and outputs image capturing information including a captured moving image in which a time stamp indicating an image capturing time is linked as metadata to each frame. The captured moving image corresponds to the captured image, and the time stamp corresponds to the image capturing time. Image capturing information (captured moving image) of a time zone in which the moving space has been scanned in advance by the moving body 10 is loaded (step 102). The loaded image capturing information is output to the image verification unit 201, and verification of the captured image is performed. In the present embodiment, the captured image in which an image of the moving body 10 is captured and the image capturing time of the captured image are detected from the image capturing information. In other words, a frame in which the image of the moving body 10 is captured and a time stamp of the frame are detected. This processing corresponds to “RECOGNIZE ROBOT ITSELF IN MOVING IMAGE” illustrated in FIG. 3. A method of detecting the moving body 10 from the frame is not limited, and may be any technology. For example, any image recognition technology such as matching processing, edge detection, projection conversion, or the like, using a model image of the moving body 10 may be used. In order to detect the moving body 10, any machine learning algorithm using, for example, a deep neural network (DNN) or the like may be used. For example, by using artificial intelligence (AI) or the like that performs deep learning, it is possible to improve detection accuracy of the moving body 10.” | Paragraphs 118-120 “As illustrated in FIG. 5, the image verification unit 201 performs a check for every frame from the image capturing information of the security camera system 137 projecting the moving body 10 that has moved within the moving space 20 at the time of pre-scanning, and verifies an image and a time in which the moving body (hereinafter, may be referred to a host device) 10 is projected (step 201). The past path information 11 held by the past path holding unit 202 and the time in which the host device is projected in the image capturing range 12, which is verified in step 201, are collated with each other (step 202). The cost map generation unit 203 detects the image capturing range 12 of the security camera system 137 on the basis of the time in which the host device is projected and the corresponding path information 11. In step 203, as illustrated in FIG. 6A, the detected image capturing range 12 is mapped onto a map by the cost map generation unit 203, such that a cost map 30 in which a cost of the surrounding of a path through which the host device has passed is set to be low is generated. The cost map holding unit 204 holds the generated cost map 30. 
In other words, in the present embodiment, the cost map 30 is calculated so that a cost regarding movement is lower inside the image capturing range 12 than outside the image capturing range 12.”), wherein the control unit generates route information relating to a movement route of the mobile object on the basis of the space information acquired from the conversion information storing unit and the type information of the mobile object (See at least Nakai FIGS. 6 and 8-10 and Paragraph 120 “In step 203, as illustrated in FIG. 6A, the detected image capturing range 12 is mapped onto a map by the cost map generation unit 203, such that a cost map 30 in which a cost of the surrounding of a path through which the host device has passed is set to be low is generated. The cost map holding unit 204 holds the generated cost map 30. In other words, in the present embodiment, the cost map 30 is calculated so that a cost regarding movement is lower inside the image capturing range 12 than outside the image capturing range 12.”).

Nakai fails to explicitly disclose wherein the conversion information storing unit is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control system further comprising an association unit configured to associate the second reference system with the first reference system.

Lasang teaches wherein the conversion information storing unit is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control system further comprising an association unit configured to associate the second reference system with the first reference system (See at least Lasang FIG. 113 and Paragraphs 829-835 “The following describes a configuration for encoding a geo-referenced point cloud. FIG. 113 is a diagram illustrating a configuration of three-dimensional data encoding device 2910 according to this embodiment. This configuration is applied when coordinates conversion is required. Three-dimensional data encoding device 2910 includes geometric information encoder 2911, geographic information encoder 2912, and bitstream generator 2913. Geometric information encoder 2911 includes octree builder 2914 and entropy encoder 2915. Geographic information encoder 2912 includes coordinates converter 2916, and entropy encoder 2917. Octree builder 2914 builds an octree using geometric information (geometric coordinates) included in an input point cloud. Octree builder 2914 also generates an occupancy code of each node of the octree. The geometric information is, for example, data obtained using LiDAR.
Here, entropy encoder 2915 entropy encodes geometric information (e.g., occupancy codes generated by octree builder 2914) to generate a bitstream (encoded data) of the geometric information. Entropy encoder 2915 entropy encodes the geometric information using a coding table different from the one used for entropy encoding geographic information (geographic coordinates). Coordinates converter 2916 converts geographic information (latitude, longitude, and altitude) of a vehicle, which is obtained from a satellite (e.g., a GPS satellite), to world coordinates (e.g., a coordinate system with respect to a reference point in a country) or local coordinates (e.g., a coordinate system using a vehicle location as a reference). In the aforementioned first mode, coordinates converter 2916 converts the geometric coordinates of three-dimensional points to world coordinates using the world coordinates of the vehicle which are generated through the conversion, and then converts the obtained world coordinates of the three-dimensional points to geographic coordinates. Entropy encoder 2917 entropy encodes the geographic coordinates generated by coordinates converter 2916 to generate a bitstream (encoded data) of the geographic information. Here, entropy encoder 2917 entropy encodes the geographic information using a coding table different from the one used for entropy encoding the geometric information. Bitstream generator 2913 generates a bitstream including the bitstream of geometric information and the bitstream of geographic information. Note that when geographic coordinates (latitude, longitude, altitude) are added to an input point cloud, coordinates converter 2916 may convert the geographic coordinates to world coordinates and entropy encoder 2917 may encode the world coordinates obtained through the conversion.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakai to include wherein the conversion information storing unit is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control system further comprising an association unit configured to associate the second reference system with the first reference system, as taught by Lasang as disclosed above, in order to ensure accurate and efficient object detection (Lasang Paragraph 8 “The present disclosure provides a three-dimensional data encoding method, a three-dimensional data decoding method, a three-dimensional data encoding device, and a three-dimensional data decoding device capable of reducing the amount of processing performed by a decoding device.”).
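
The cost-map mechanism quoted from Nakai (Paragraphs 118-120, and 153-158 below) reduces to marking grid cells covered by a camera's image capturing range and assigning them a lower traversal cost, which a planner then prefers. A minimal sketch, assuming a rectangular 2D grid and hypothetical names; this illustrates the cited technique, not code from Nakai or the application:

```python
import numpy as np

def build_cost_map(shape, camera_ranges, low_cost=1.0, high_cost=10.0):
    """Assign lower traversal cost inside camera capturing ranges.

    shape: (rows, cols) of a 2D grid over the moving space.
    camera_ranges: list of (r0, r1, c0, c1) rectangles approximating
    each camera's image capturing range (hypothetical representation).
    """
    cost = np.full(shape, high_cost)       # outside coverage: high cost
    for r0, r1, c0, c1 in camera_ranges:
        cost[r0:r1, c0:c1] = low_cost      # inside coverage: low cost
    return cost

# A route planner would prefer paths through low-cost (camera-covered) cells;
# Nakai additionally raises a region's cost when an obstacle blocks it.
cost_map = build_cost_map((20, 20), [(2, 10, 3, 12)])
assert cost_map[5, 5] == 1.0 and cost_map[15, 15] == 10.0
```
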
With respect to claim 2, Nakai in view of Lasang teach that the association unit associates the second reference system with the first reference system by registering reference parameters of the second reference system in the first reference system (See at least Lasang Paragraph 841 “Coordinates converter 2927 converts world coordinates or local coordinates to geographic coordinates using the decoded geographic information. In the above-described case B, for example, the geographic information of a vehicle is entropy decoded. Coordinates converter 2927 converts the decoded geographic information of the vehicle to world coordinates. Subsequently coordinates converter 2927 converts the geometric coordinates of three-dimensional points to world coordinates using the world coordinates of the vehicle which are generated through the conversion, and then converts the obtained world coordinates of the three-dimensional points to geographic coordinates.”).

With respect to claim 3, Nakai in view of Lasang teach that the reference parameters include at least a position of an origin, setting specifications of coordinate axes, an assignment rule for unique identifiers, a reference position of a divided space, and an estimation method for a self-position for the reference system represented by the reference parameters (See at least Lasang Paragraphs 860-863 “Alternatively, the three-dimensional data decoding device decodes geographic information (S2924). The three-dimensional data decoding device then converts the geographic information from world coordinates (X, Y, Z) to geographic coordinates (φ, λ, h) (S2925). In the above-described case B, for example, the geographic information of a vehicle is decoded. The three-dimensional data decoding device converts the decoded geographic information of the vehicle to world coordinates. Subsequently, the three-dimensional data decoding device converts the geometric coordinates of each of three-dimensional points to world coordinates using the world coordinates of the vehicle which are generated through the conversion, and then converts the obtained world coordinates of the three-dimensional points to geographic coordinates. When the bitstream includes the geographic coordinates of the three-dimensional point clouds, the conversion (S2925) need not be performed. Note that when a bitstream includes encoded world coordinates, the three-dimensional data decoding device may decode the bitstream to decode the world coordinates, and convert the world coordinates to geographic coordinates. Lastly the three-dimensional data decoding device combines the decoded point cloud with the geographic information (S2926). As described above, the three-dimensional data encoding device according to this embodiment performs the process illustrated in FIG. 121. The three-dimensional data encoding device encodes three-dimensional points obtained by a sensor. The three-dimensional data encoding device encodes local coordinate information indicating sets of local coordinates that are coordinates of the three-dimensional points and are dependent on the location of the sensor (S2931), and generates an encoded bitstream including the encoded local coordinate information (e.g. geometric information 2905) and global coordinate information (e.g., geographic information 2906 or 2906A) indicating global coordinates that are coordinates of a reference point or at least one of the three-dimensional points and are independent from the location of the sensor (S2932).”).

With respect to claim 4, Nakai in view of Lasang teach that the first reference system and the second reference system have a hierarchical structure (See at least Lasang Paragraph 169 “Also note that voxels with a hierarchical structure may be used. In such a case, when the hierarchy includes n levels, whether a sampling point is included in the n−1th level or lower levels (levels below the n-th level) may be sequentially indicated.
For example, when only the n-th level is decoded, and the n−1th level or lower levels include a sampling point, the n-th level can be decoded on the assumption that a sampling point is included at the center of a voxel in the n-th level.”).

With respect to claim 5, Nakai in view of Lasang teach that the association unit associates the first reference system with the second reference system by registering reference parameters of the second reference system in the first reference system and registering the reference parameters of the first reference system in the second reference system (See at least Lasang Paragraph 841 “Coordinates converter 2927 converts world coordinates or local coordinates to geographic coordinates using the decoded geographic information. In the above-described case B, for example, the geographic information of a vehicle is entropy decoded. Coordinates converter 2927 converts the decoded geographic information of the vehicle to world coordinates. Subsequently coordinates converter 2927 converts the geometric coordinates of three-dimensional points to world coordinates using the world coordinates of the vehicle which are generated through the conversion, and then converts the obtained world coordinates of the three-dimensional points to geographic coordinates.”).

With respect to claim 6, Nakai in view of Lasang teach that the first reference system and the second reference system include at least one of a coordinate system defined using latitude/longitude/height, an arbitrary XYZ coordinate system, an MGRS, a pixel coordinate system, and a tile coordinate system (See at least Lasang Paragraph 826 “FIG. 112 is a diagram illustrating an example of a structure of a bitstream in the second mode. Geographic information 2906A indicates global coordinates associated with a set of three-dimensional points. The global coordinates may be geographic coordinates (latitude and longitude coordinates) or world coordinates (Cartesian coordinates). In the example illustrated in FIG. 112, mode flag 2902 indicates the second mode.”).

With respect to claim 7, Nakai in view of Lasang teach that a gap between pieces of position point group data configuring the movement route is configured to be adjustable (See at least Nakai FIG. 6 and Paragraphs 153-158 “The planning unit 134 calculates the movement plan so that the moving body passes through the low cost region 42 in which a total cost is low. It is assumed that the moving body 10 cannot move because many persons are gathered or a large load or the like is left in front (broken line portion) of the elevator 41. In this case, the cost map generation unit 203 sets (updates) a cost of the low cost region 42 to the maximum (from YES in step 303 to step 304). The planning unit 134 updates the movement plan on the basis of the updated cost map. As a result, for example, a movement plan of the moving body passing through the high cost region 43 is calculated. The moving body 10 arrives at the destination by passing through the high cost region 43 … For example, in a case where the obstacle is detected in the low cost region 42, it is possible for the moving body to stand by for a sufficient time and determine whether or not to update the cost map. On the other hand, in a case where the obstacle is detected in the high cost region 43, it is determined in a short time whether or not to update the cost map.
As a result, it is possible to move the moving body 10 so as to pass through the low cost region 42 as much as possible without causing the moving body 10 to stand by for a long time in the high cost region 43, which is a region outside the image capturing range 12. The determination time of whether or not the moving body 10 updates the cost map due to the obstacle or the like can be said to be a reroute decision time. Furthermore, in a case where the determination time is short, an update speed of the cost map is fast, and in a case where the determination time is long, an update speed of the cost map is slow. Therefore, these update speeds of the cost map can be said to be reroute decision speeds. In other words, the short reroute decision time and the fast reroute decision speed are the same as each other. A specific example of the reroute decision time and the reroute decision speed will be described with reference to FIG. 10B.”). With respect to claim 8, Nakai teaches a control method comprising: a control process of giving a control instruction to at least one or more mobile objects; and a conversion information storing process of converting space information including information relating to a type of object present in a space defined using a first reference system and information relating to a time into a format in association with a unique identifier and storing the format, (See at least Nakai FIGS. 2-4 and Paragraphs 98-105 “First, the moving space is scanned by the moving body 10 on the basis of the map information stored in advance (step 101). Then, path information in which a traveling path and a time stamp are linked to each other is generated. Note that the traveling path corresponds to information of a passing position, and the time stamp corresponds to information of a passing time. This processing corresponds to “SCAN INNER PORTION OF MOVING SPACE IN ADVANCE” illustrated in FIG. 3 … The security camera system 137 generates and outputs image capturing information including a captured moving image in which a time stamp indicating an image capturing time is linked as metadata to each frame. The captured moving image corresponds to the captured image, and the time stamp corresponds to the image capturing time. Image capturing information (captured moving image) of a time zone in which the moving space has been scanned in advance by the moving body 10 is loaded (step 102). The loaded image capturing information is output to the image verification unit 201, and verification of the captured image is performed. In the present embodiment, the captured image in which an image of the moving body 10 is captured and the image capturing time of the captured image are detected from the image capturing information. In other words, a frame in which the image of the moving body 10 is captured and a time stamp of the frame are detected. This processing corresponds to “RECOGNIZE ROBOT ITSELF IN MOVING IMAGE” illustrated in FIG. 3. A method of detecting the moving body 10 from the frame is not limited, and may be any technology. For example, any image recognition technology such as matching processing, edge detection, projection conversion, or the like, using a model image of the moving body 10 may be used. In order to detect the moving body 10, any machine learning algorithm using, for example, a deep neural network (DNN) or the like may be used. 
For example, by using artificial intelligence (AI) or the like that performs deep learning, it is possible to improve detection accuracy of the moving body 10.” | Paragraphs 118-120 “As illustrated in FIG. 5, the image verification unit 201 performs a check for every frame from the image capturing information of the security camera system 137 projecting the moving body 10 that has moved within the moving space 20 at the time of pre-scanning, and verifies an image and a time in which the moving body (hereinafter, may be referred to a host device) 10 is projected (step 201). The past path information 11 held by the past path holding unit 202 and the time in which the host device is projected in the image capturing range 12, which is verified in step 201, are collated with each other (step 202). The cost map generation unit 203 detects the image capturing range 12 of the security camera system 137 on the basis of the time in which the host device is projected and the corresponding path information 11. In step 203, as illustrated in FIG. 6A, the detected image capturing range 12 is mapped onto a map by the cost map generation unit 203, such that a cost map 30 in which a cost of the surrounding of a path through which the host device has passed is set to be low is generated. The cost map holding unit 204 holds the generated cost map 30. In other words, in the present embodiment, the cost map 30 is calculated so that a cost regarding movement is lower inside the image capturing range 12 than outside the image capturing range 12.”), wherein the control process generates route information relating to a movement route of the mobile object on the basis of the space information acquired in the conversion information storing process and the type information of the mobile object (See at least Nakai FIGS. 6 and 8-10 and Paragraph 120 “In step 203, as illustrated in FIG. 6A, the detected image capturing range 12 is mapped onto a map by the cost map generation unit 203, such that a cost map 30 in which a cost of the surrounding of a path through which the host device has passed is set to be low is generated. The cost map holding unit 204 holds the generated cost map 30. In other words, in the present embodiment, the cost map 30 is calculated so that a cost regarding movement is lower inside the image capturing range 12 than outside the image capturing range 12.”). Nakai fails to explicitly disclose wherein the conversion information storing process is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control method further comprising an association process of associating the second reference system with the first reference system. Lasang teaches wherein the conversion information storing process is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control method further comprising an association process of associating the second reference system with the first reference system (See at least Lasang FIG. 113 and Paragraphs 829-835 “The following describes a configuration for encoding a geo-referenced point cloud. 
FIG. 113 is a diagram illustrating a configuration of three-dimensional data encoding device 2910 according to this embodiment. This configuration is applied when coordinates conversion is required. Three-dimensional data encoding device 2910 includes geometric information encoder 2911, geographic information encoder 2912, and bitstream generator 2913. Geometric information encoder 2911 includes octree builder 2914 and entropy encoder 2915. Geographic information encoder 2912 includes coordinates converter 2916, and entropy encoder 2917. Octree builder 2914 builds an octree using geometric information (geometric coordinates) included in an input point cloud. Octree builder 2914 also generates an occupancy code of each node of the octree. The geometric information is, for example, data obtained using LiDAR. Here, entropy encoder 2915 entropy encodes geometric information (e.g., occupancy codes generated by octree builder 2914) to generate a bitstream (encoded data) of the geometric information. Entropy encoder 2915 entropy encodes the geometric information using a coding table different from the one used for entropy encoding geographic information (geographic coordinates). Coordinates converter 2916 converts geographic information (latitude, longitude, and altitude) of a vehicle, which is obtained from a satellite (e.g., a GPS satellite), to world coordinates (e.g., a coordinate system with respect to a reference point in a country) or local coordinates (e.g., a coordinate system using a vehicle location as a reference). In the aforementioned first mode, coordinates converter 2916 converts the geometric coordinates of three-dimensional points to world coordinates using the world coordinates of the vehicle which are generated through the conversion, and then converts the obtained world coordinates of the three-dimensional points to geographic coordinates. Entropy encoder 2917 entropy encodes the geographic coordinates generated by coordinates converter 2916 to generate a bitstream (encoded data) of the geographic information. Here, entropy encoder 2917 entropy encodes the geographic information using a coding table different from the one used for entropy encoding the geometric information. Bitstream generator 2913 generates a bitstream including the bitstream of geometric information and the bitstream of geographic information. Note that when geographic coordinates (latitude, longitude, altitude) are added to an input point cloud, coordinates converter 2916 may convert the geographic coordinates to world coordinates and entropy encoder 2917 may encode the world coordinates obtained through the conversion.”). 
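
For orientation, the conversion chain performed by Lasang's coordinates converter 2916 as quoted above (geographic latitude/longitude/altitude to a world or local Cartesian frame) can be sketched with a flat-earth approximation. This is an illustration of the general technique only; Lasang does not disclose this formula, and the constant and function name here are assumed:

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius, an approximation

def geographic_to_local(lat_deg, lon_deg, alt_m, ref_lat_deg, ref_lon_deg, ref_alt_m=0.0):
    """Approximate (lat, lon, alt) -> local (x, y, z) in meters relative to a
    reference point, using an equirectangular (flat-earth) model. Valid only
    near the reference; a production system would use a geodetic library or
    a projected coordinate reference system instead."""
    d_lat = math.radians(lat_deg - ref_lat_deg)
    d_lon = math.radians(lon_deg - ref_lon_deg)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(ref_lat_deg))  # east
    y = EARTH_RADIUS_M * d_lat                                        # north
    z = alt_m - ref_alt_m                                             # up
    return x, y, z

# A point about 111 m north of the reference:
print(geographic_to_local(35.001, 139.0, 10.0, 35.0, 139.0))  # ~(0.0, 111.3, 10.0)
```
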
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakai to include wherein the conversion information storing process is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control method further comprising an association process of associating the second reference system with the first reference system, as taught by Lasang as disclosed above, in order to ensure accurate and efficient object detection (Lasang Paragraph 8 “The present disclosure provides a three-dimensional data encoding method, a three-dimensional data decoding method, a three-dimensional data encoding device, and a three-dimensional data decoding device capable of reducing the amount of processing performed by a decoding device.”). With respect to claim 9, Nakai teaches a non-transitory computer-readable storage medium configured to store a computer program comprising instructions for executing following processes: a control process of giving a control instruction to at least one or more mobile objects; and a conversion information storing process of converting space information including information relating to a type of object present in a space defined using a first reference system and information relating to a time into a format in association with a unique identifier and storing the format, (See at least Nakai FIGS. 2-4 and Paragraphs 98-105 “First, the moving space is scanned by the moving body 10 on the basis of the map information stored in advance (step 101). Then, path information in which a traveling path and a time stamp are linked to each other is generated. Note that the traveling path corresponds to information of a passing position, and the time stamp corresponds to information of a passing time. This processing corresponds to “SCAN INNER PORTION OF MOVING SPACE IN ADVANCE” illustrated in FIG. 3 … The security camera system 137 generates and outputs image capturing information including a captured moving image in which a time stamp indicating an image capturing time is linked as metadata to each frame. The captured moving image corresponds to the captured image, and the time stamp corresponds to the image capturing time. Image capturing information (captured moving image) of a time zone in which the moving space has been scanned in advance by the moving body 10 is loaded (step 102). The loaded image capturing information is output to the image verification unit 201, and verification of the captured image is performed. In the present embodiment, the captured image in which an image of the moving body 10 is captured and the image capturing time of the captured image are detected from the image capturing information. In other words, a frame in which the image of the moving body 10 is captured and a time stamp of the frame are detected. This processing corresponds to “RECOGNIZE ROBOT ITSELF IN MOVING IMAGE” illustrated in FIG. 3. A method of detecting the moving body 10 from the frame is not limited, and may be any technology. For example, any image recognition technology such as matching processing, edge detection, projection conversion, or the like, using a model image of the moving body 10 may be used. 
In order to detect the moving body 10, any machine learning algorithm using, for example, a deep neural network (DNN) or the like may be used. For example, by using artificial intelligence (AI) or the like that performs deep learning, it is possible to improve detection accuracy of the moving body 10.” | Paragraphs 118-120 “As illustrated in FIG. 5, the image verification unit 201 performs a check for every frame from the image capturing information of the security camera system 137 projecting the moving body 10 that has moved within the moving space 20 at the time of pre-scanning, and verifies an image and a time in which the moving body (hereinafter, may be referred to a host device) 10 is projected (step 201). The past path information 11 held by the past path holding unit 202 and the time in which the host device is projected in the image capturing range 12, which is verified in step 201, are collated with each other (step 202). The cost map generation unit 203 detects the image capturing range 12 of the security camera system 137 on the basis of the time in which the host device is projected and the corresponding path information 11. In step 203, as illustrated in FIG. 6A, the detected image capturing range 12 is mapped onto a map by the cost map generation unit 203, such that a cost map 30 in which a cost of the surrounding of a path through which the host device has passed is set to be low is generated. The cost map holding unit 204 holds the generated cost map 30. In other words, in the present embodiment, the cost map 30 is calculated so that a cost regarding movement is lower inside the image capturing range 12 than outside the image capturing range 12.”), wherein the control process generates route information relating to a movement route of the mobile object on the basis of the space information acquired in the conversion information storing process and the type information of the mobile object (See at least Nakai FIGS. 6 and 8-10 and Paragraph 120 “In step 203, as illustrated in FIG. 6A, the detected image capturing range 12 is mapped onto a map by the cost map generation unit 203, such that a cost map 30 in which a cost of the surrounding of a path through which the host device has passed is set to be low is generated. The cost map holding unit 204 holds the generated cost map 30. In other words, in the present embodiment, the cost map 30 is calculated so that a cost regarding movement is lower inside the image capturing range 12 than outside the image capturing range 12.”). Nakai fails to explicitly disclose wherein the conversion information storing process is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control method further comprising an association process of associating the second reference system with the first reference system. 
Lasang teaches wherein the conversion information storing process is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control method further comprising an association process of associating the second reference system with the first reference system (See at least Lasang FIG. 113 and Paragraphs 829-835 “The following describes a configuration for encoding a geo-referenced point cloud. FIG. 113 is a diagram illustrating a configuration of three-dimensional data encoding device 2910 according to this embodiment. This configuration is applied when coordinates conversion is required. Three-dimensional data encoding device 2910 includes geometric information encoder 2911, geographic information encoder 2912, and bitstream generator 2913. Geometric information encoder 2911 includes octree builder 2914 and entropy encoder 2915. Geographic information encoder 2912 includes coordinates converter 2916, and entropy encoder 2917. Octree builder 2914 builds an octree using geometric information (geometric coordinates) included in an input point cloud. Octree builder 2914 also generates an occupancy code of each node of the octree. The geometric information is, for example, data obtained using LiDAR. Here, entropy encoder 2915 entropy encodes geometric information (e.g., occupancy codes generated by octree builder 2914) to generate a bitstream (encoded data) of the geometric information. Entropy encoder 2915 entropy encodes the geometric information using a coding table different from the one used for entropy encoding geographic information (geographic coordinates). Coordinates converter 2916 converts geographic information (latitude, longitude, and altitude) of a vehicle, which is obtained from a satellite (e.g., a GPS satellite), to world coordinates (e.g., a coordinate system with respect to a reference point in a country) or local coordinates (e.g., a coordinate system using a vehicle location as a reference). In the aforementioned first mode, coordinates converter 2916 converts the geometric coordinates of three-dimensional points to world coordinates using the world coordinates of the vehicle which are generated through the conversion, and then converts the obtained world coordinates of the three-dimensional points to geographic coordinates. Entropy encoder 2917 entropy encodes the geographic coordinates generated by coordinates converter 2916 to generate a bitstream (encoded data) of the geographic information. Here, entropy encoder 2917 entropy encodes the geographic information using a coding table different from the one used for entropy encoding the geometric information. Bitstream generator 2913 generates a bitstream including the bitstream of geometric information and the bitstream of geographic information. Note that when geographic coordinates (latitude, longitude, altitude) are added to an input point cloud, coordinates converter 2916 may convert the geographic coordinates to world coordinates and entropy encoder 2917 may encode the world coordinates obtained through the conversion.”). 
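The claim-mapped portion of Lasang is coordinates converter 2916, which turns geographic coordinates (latitude, longitude, altitude) into world or local coordinates. As a minimal sketch, assuming the standard WGS-84 geodetic-to-ECEF formula as a stand-in (Lasang does not specify the datum or the conversion math), the step might look like this; all function names are illustrative:

```python
# Hedged illustration of Lasang's coordinates converter 2916: geographic
# coordinates (latitude, longitude, altitude) from a GPS satellite are
# converted to world coordinates, then optionally re-expressed relative to
# the vehicle. The WGS-84 geodetic-to-ECEF formula below is a stand-in
# assumption; Lasang does not specify the datum or conversion math.

import math

WGS84_A = 6378137.0                    # semi-major axis, meters
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared


def geographic_to_world(lat_deg: float, lon_deg: float, alt_m: float) -> tuple[float, float, float]:
    """Convert geographic coordinates to Earth-centered (ECEF) world coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude.
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z


def world_to_local(point, vehicle):
    """Re-express a world point relative to the vehicle location (Lasang's
    'local coordinates'); the rotation into the vehicle frame is omitted."""
    return tuple(p - v for p, v in zip(point, vehicle))


if __name__ == "__main__":
    vehicle = geographic_to_world(35.6586, 139.7454, 40.0)  # example GPS fix
    landmark = geographic_to_world(35.6590, 139.7460, 45.0)
    print(world_to_local(landmark, vehicle))
```

The octree construction and the separate entropy coding tables for geometric versus geographic information are omitted here; the sketch covers only the reference-system association the rejection relies on.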
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Nakai to include wherein the conversion information storing process is able to convert space information including information relating to a type of object present in a space defined using a second reference system different from the first reference system and information relating to a time into a format in association with a unique identifier and store the format, the control method further comprising an association process of associating the second reference system with the first reference system, as taught by Lasang as disclosed above, in order to ensure accurate and efficient object detection (Lasang Paragraph 8 “The present disclosure provides a three-dimensional data encoding method, a three-dimensional data decoding method, a three-dimensional data encoding device, and a three-dimensional data decoding device capable of reducing the amount of processing performed by a decoding device.”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM ABDOALATIF ALSOMAIRY whose telephone number is (571)272-5653. The examiner can normally be reached M-F 7:30-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Faris Almatrahi, can be reached at 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IBRAHIM ABDOALATIF ALSOMAIRY/
Examiner, Art Unit 3667

/KENNETH J MALKOWSKI/
Primary Examiner, Art Unit 3667

Prosecution Timeline

Jul 31, 2024
Application Filed
Jan 03, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602044
VEHICLE CONTROL SYSTEM, VEHICLE CONTROL METHOD, AND VEHICLE CONTROL PROGRAM
2y 5m to grant • Granted Apr 14, 2026
Patent 12578728
AUTONOMOUS SNOW REMOVING MACHINE
2y 5m to grant • Granted Mar 17, 2026
Patent 12426758
METHOD AND APPARATUS FOR CONTROLLING ROBOT, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant • Granted Sep 30, 2025
Patent 12313379
SYSTEM FOR NEUTRALISING A TARGET USING A DRONE AND A MISSILE
2y 5m to grant • Granted May 27, 2025
Patent 12265385
SYSTEMS, DEVICES, AND METHODS FOR MILLIMETER WAVE COMMUNICATION FOR UNMANNED AERIAL VEHICLES
2y 5m to grant • Granted Apr 01, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
40%
Grant Probability
49%
With Interview (+8.4%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 82 resolved cases by this examiner. Grant probability derived from career allow rate.
