Prosecution Insights
Last updated: April 19, 2026
Application No. 18/573,382

System, Method and Computer Readable Medium for Determining Characteristics Of Surgical Related Items and Procedure Related Items Present for Use in the Perioperative Period

Non-Final OA: §101, §102, §103, §112
Filed: Dec 21, 2023
Examiner: WILLIAMS, REBECCA COLETTE
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: UNIVERSITY OF VIRGINIA PATENT FOUNDATION
OA Round: 1 (Non-Final)
Grant Probability: 43% (Moderate)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 43% (grants 43% of resolved cases; 3 granted / 7 resolved; -19.1% vs TC avg)
Interview Lift: +66.7% (strong; resolved cases with interview vs without)
Typical timeline: 2y 9m avg prosecution; 25 currently pending
Career history: 32 total applications across all art units
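The headline examiner metrics above can be reproduced with a little arithmetic. A minimal sketch (only the 3 granted / 7 resolved counts come from this report; the 0.50/0.30 with/without-interview rates and the helper names are illustrative assumptions):

```python
# Sketch: reproducing the examiner metrics above.
# Grounded input: 3 granted out of 7 resolved cases (from this report).
# The with/without-interview rates below are hypothetical placeholders
# chosen only to illustrate how a relative lift is computed.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative lift of with-interview allowance over without-interview."""
    return rate_with / rate_without - 1

print(f"Career allow rate: {allow_rate(3, 7):.0%}")         # 43%
print(f"Interview lift: {interview_lift(0.50, 0.30):+.1%}")  # +66.7%
```

The report does not disclose the per-case interview breakdown, so the lift inputs here are placeholders; only the 43% allow rate is directly reproducible from the stated counts.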

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Deltas are relative to Tech Center average estimates; based on career data from 7 resolved cases.
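A quick sanity check on these figures: each delta pairs the examiner's rate with a Tech Center average, and back-computing that average from every pair yields roughly the same baseline, about 40%. (That baseline is an inference from the numbers above, not a figure stated in the report.)

```python
# Sketch: back-computing the implied Tech Center average from each
# statute's rate and its "vs TC avg" delta (values from this report).

RATES  = {"§101": 0.124, "§102": 0.131, "§103": 0.579, "§112": 0.166}
DELTAS = {"§101": -0.276, "§102": -0.269, "§103": +0.179, "§112": -0.234}

for statute, rate in RATES.items():
    tc_avg = rate - DELTAS[statute]  # implied Tech Center average
    print(f"{statute}: {rate:.1%} ({DELTAS[statute]:+.1%} vs TC avg {tc_avg:.1%})")
```

Every statute pair implies the same estimate (40.0%), consistent with a single Tech Center average baseline behind all four deltas.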

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The listing of references in the PCT international search report is not considered to be an information disclosure statement (IDS) complying with 37 CFR 1.98. 37 CFR 1.98(a)(2) requires a legible copy of: (1) each foreign patent; (2) each publication or that portion which caused it to be listed; (3) for each cited pending U.S. application, the application specification including claims, and any drawing of the application, or that portion of the application which caused it to be listed including any claims directed to that portion, unless the cited pending U.S. application is stored in the Image File Wrapper (IFW) system; and (4) all other information, or that portion which caused it to be listed. In addition, each IDS must include a list of all patents, publications, applications, or other information submitted for consideration by the Office (see 37 CFR 1.98(a)(1) and (b)), and MPEP § 609.04(a), subsection I. states, “the list ... must be submitted on a separate paper.” Therefore, the references cited in the international search report have not been considered.

Applicant is advised that the date of submission of any item of information in the international search report will be the date of submission of the IDS for purposes of determining compliance of the IDS with the requirements of 37 CFR 1.97, including all timing statement requirements of 37 CFR 1.97(e). See MPEP § 609.05(a).

Priority

All claims have been examined using the effective filing date of 06/29/2021 from provisional application 63/216,285.

Claim Interpretation

Claim 13 recites the limitation "The system of claim 1, wherein the machine learning algorithm includes an artificial neural network (ANN) or deep learning algorithm." However, claim 1 does not recite a “machine learning algorithm”.
Claim 6 recites “The system of claim 1, wherein said trained computer vision model is generated on preliminary image data using a machine learning algorithm.” Claim 13 will be interpreted as if descending from claim 6.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 13-14, 33-34, and 53-54 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 13 recites the limitation "The system of claim 1, wherein the machine learning algorithm includes an artificial neural network (ANN) or deep learning algorithm." There is insufficient antecedent basis for this limitation in the claim: claim 1 does not recite a “machine learning algorithm”. Claim 14 is dependent on claim 13.

Claim 33 is substantially similar to claim 13, being the method version of claim 13, and the 112(b) rejection with regards to claim 13 is applied mutatis mutandis. Claim 34 is dependent on claim 33.

Claim 53 is substantially similar to claim 13, being directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 13, and the 112(b) rejection with regards to claim 13 is applied mutatis mutandis. Claim 54 is dependent on claim 53.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 6, 11, 15, 17, 20-21, 26, 31, 35, 37, 40-41, 46, 51, 55, 57, and 60 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract mental process without significantly more.

Claim 1 recites, “a system (generic computer) configured for determining one or more characteristics of surgical related items and/or procedure related items present at preoperative, intraoperative, and/or postoperative settings and/or simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can view items in different environments and determine characteristics), comprising: one or more computer processors (generic computer); a memory (generic computer) configured to store instructions that are executable by said one or more computer processors (generic computer), wherein said one or more computer processors (generic computer) are configured to execute the instructions to: receive settings image data corresponding with the preoperative, intraoperative, and/or postoperative settings and/or simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can view an environment); run a trained computer vision model (merely applied) on the received settings image data to identify and label the surgical related items and/or procedure related items in the preoperative, intraoperative, and/or postoperative settings and/or simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can view an environment and identify and label items); interpret the surgical related items and/or procedure related items through tracking and analyzing said identified and labeled surgical related items and/or procedure related items in the preoperative, intraoperative, and/or postoperative settings and/or simulated preoperative, intraoperative, and/or postoperative settings, to determine said one or more characteristics of the surgical related items and/or procedure related items (mental process, a person can view an environment and identify and label items and determine characteristics (analyze) as items move in the environment (tracking)); and transmit said one or more determined characteristics to a secondary source (mental process, a person can communicate findings to someone else or write them down).”

Claim 6 recites, “The system of claim 1 (mental process), wherein said trained computer vision model is generated on preliminary image data using a machine learning algorithm (well-known and understood extra-solution activity).”

Claim 11 recites, “The system of claim 1 (mental process), wherein said one or more computer processors are configured to execute the instructions for said tracking and analyzing at one or more of the following: one or more databases; cloud infrastructure; and edge-computing (merely applied).”

Claim 15 recites, “The system of claim 1 (mental process), wherein said determined one or more characteristics includes any combination of one or more of the following: identification of the one or more of the surgical related items and/or procedure related items (mental process, a person can identify an item); usage or non-usage status of the one or more of the surgical related items and/or procedure related items (mental process, a person can identify whether or not an item has been used); opened or unopened status of the one or more of the surgical related items and/or procedure related items (mental process, a person can identify whether or not an item has been opened); moved or non-moved status of the one or more of the surgical related items and/or procedure related items (mental process, a person can identify whether or not an item has been moved); single-use or reusable status of the one or more of the surgical related items and/or procedure related items (mental process, a person can identify whether or not an item is reusable); or association of clinical events, logistical events, or operational events (mental process, a person can determine an association of items to events).”

Claim 17 recites, “The system of claim 1 (mental process), wherein said one or more computer processors (generic computer) are further configured to, based on said determined one or more characteristics, execute the instructions to: determine an actionable output to reduce unnecessary waste of the surgical related items for use in the preoperative, intraoperative, and/or postoperative settings and/or the simulated preoperative, intraoperative phase, and/or postoperative settings (mental process, a person can determine what actions to perform based on the characteristics of items in any environment); determine an actionable output to reorganize the surgical related items for use in the preoperative, intraoperative, and/or postoperative settings or the simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can determine what actions to perform based on the characteristics of items in any environment); determine an actionable output to reduce supply, storage, sterilization and disposal costs associated with use of the surgical related items for use in the preoperative, intraoperative, and/or postoperative settings and/or the simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can determine what actions to perform based on the characteristics of items in any environment); determine an actionable output to reduce garbage and unnecessary re-sterilization associated with use of the surgical related items for use in the preoperative, intraoperative, and/or postoperative settings and/or the simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can determine what actions to perform based on the characteristics of items in any environment); determine an actionable output to streamline setup of the surgical related items for use in the preoperative, intraoperative, and/or postoperative settings and/or the simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can determine what actions to perform based on the characteristics of items in any environment); determine an actionable output to improve efficiency of using the surgical related items for use in the preoperative, intraoperative, and/or postoperative settings and/or the simulated preoperative, intraoperative, and/or postoperative settings (mental process, a person can determine what actions to perform based on the characteristics of items in any environment); determine an actionable output to identify, rank, and/or recognize level of efficiency of surgeons or clinicians (mental process, a person can determine what actions to perform based on the characteristics of items in any environment); and/or determine an actionable output to improve the level of efficiency of using the surgical related items and/or procedure related items that are sterilized (mental process, a person can determine what actions to perform based on the characteristics of items in any environment).”

Claim 20 recites, “The system of claim 1 (mental processes), wherein said settings image data comprises three dimensional renderings or representation of information of the surgical related items and/or procedure related items in the preoperative, intraoperative, and/or postoperative settings and/or simulated preoperative, intraoperative, and/or postoperative settings (mental process, people see in three dimensions and can view three dimensional renderings).”

Claim 21 is substantially similar to claim 1, being the method version of claim 1, and the 101 rejection with regards to claim 1 is applied mutatis mutandis. Claim 26 is substantially similar to claim 6, being the method version of claim 6, and the 101 rejection with regards to claim 6 is applied mutatis mutandis. Claim 31 is substantially similar to claim 11, being the method version of claim 11, and the 101 rejection with regards to claim 11 is applied mutatis mutandis. Claim 35 is substantially similar to claim 15, being the method version of claim 15, and the 101 rejection with regards to claim 15 is applied mutatis mutandis. Claim 37 is substantially similar to claim 17, being the method version of claim 17, and the 101 rejection with regards to claim 17 is applied mutatis mutandis. Claim 40 is substantially similar to claim 20, being the method version of claim 20, and the 101 rejection with regards to claim 20 is applied mutatis mutandis.
Claim 41 is substantially similar to claim 1, being directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 1, and the 101 rejection with regards to claim 1 is applied mutatis mutandis. Claim 46 is substantially similar to claim 6, being directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 6, and the 101 rejection with regards to claim 6 is applied mutatis mutandis. Claim 51 is substantially similar to claim 11, being directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 11, and the 101 rejection with regards to claim 11 is applied mutatis mutandis. Claim 55 is substantially similar to claim 15, being directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 15, and the 101 rejection with regards to claim 15 is applied mutatis mutandis. Claim 57 is substantially similar to claim 17, being directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 17, and the 101 rejection with regards to claim 17 is applied mutatis mutandis. Claim 60 is substantially similar to claim 20, being directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 20, and the 101 rejection with regards to claim 20 is applied mutatis mutandis.
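The mutatis mutandis rejections follow a fixed numbering pattern across the three claim families. A sketch of that pattern (inferred from the claim pairs listed in the Office Action; the sketch itself is illustrative, not part of the action): each method claim is its system counterpart plus 20, and each computer-readable-medium claim is the counterpart plus 40.

```python
# Sketch: the claim-family numbering pattern behind the mutatis
# mutandis rejections (inferred from the pairs listed in this action):
# method claim = system claim + 20, CRM claim = system claim + 40.

SYSTEM_CLAIMS = [1, 6, 11, 15, 17, 20]

families = {s: (s + 20, s + 40) for s in SYSTEM_CLAIMS}
for system, (method, crm) in families.items():
    print(f"system claim {system} -> method claim {method}, CRM claim {crm}")
```

This reproduces every pairing recited above (1/21/41, 6/26/46, 11/31/51, 15/35/55, 17/37/57, 20/40/60).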
Independent claims (claims 1, 21, 41) concern systems and methods [step 1], but are directed towards abstract ideas (mental processes) [step 2A Prong 1]. These claims feature additional limitations; however, these limitations do not integrate the judicial exception into a practical application [step 2A Prong 2]. Claims 1, 21, and 41 mention a processor, memory, and a computer vision model; these inclusions are made generally and are merely applied or used as tools to perform the abstract idea. Similarly, claims 11, 31, and 51 mention databases, cloud infrastructure, and edge-computing generally, merely applying technology as tools to perform the abstract idea. Additional limitations in claims 15, 17, 20, 35, 37, 40, 55, 57, and 60 list additional mental processes. Claims 6, 26, and 46 feature an additional limitation involving generating a trained computer vision model based on image data using a machine learning algorithm. This extra-solution limitation is well known in the art (see Khan, Asharul Islam, and Salim Al-Habsi. "Machine learning in computer vision." Procedia Computer Science 167 (2020): 1444-1451.) and only tangentially related to the claimed invention.

Furthermore, the above claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception [step 2B] because they do not include improvements to the functioning of a computer or any other technology or technical field, do not apply or use any particular machines (no mentions of specific components or elements), and do not apply or use technology beyond generally linking the abstract idea to a programmed computer environment. Additionally, claims 6, 26, and 46 feature an additional limitation involving generating a trained computer vision model based on image data using a machine learning algorithm. This extra-solution limitation is well known, routine, and conventional in the art (see Khan, Asharul Islam, and Salim Al-Habsi. "Machine learning in computer vision." Procedia Computer Science 167 (2020): 1444-1451.) and, as stated above, only tangentially related to the claimed invention.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 6, 8-12, 15-21, 26, 28-32, 35-41, 46, 48-52, and 55-60 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Divine (US 11250947 B2).
With respect to claim 1, Divine teaches a system configured for determining one or more characteristics of surgical related items and/or procedure related items present at intraoperative settings (“In implementations in which the one or more objects include equipment or supplies, the environment assessment component 1006 can employ the equipment/supplies assessment component 1008 to determine various types of equipment/supplies information associated with the equipment or supplies.” Page 45 col 43 lines 53-58), comprising:

one or more computer processors (“The server device 108 can also include (or is otherwise operatively coupled to) at least one processor 134 that executes the computer-executable components stored in the memory 130. The server device 108 can further include a system bus 136 that can couple the various components of the server device 108 including, but not limited to, the AR assistance module 110, the memory 130 and the processor 134” page 28 col 10 lines 19-26);

a memory configured to store instructions that are executable by said one or more computer processors (“The server device 108 can also include (or is otherwise operatively coupled to) at least one processor 134 that executes the computer-executable components stored in the memory 130. The server device 108 can further include a system bus 136 that can couple the various components of the server device 108 including, but not limited to, the AR assistance module 110, the memory 130 and the processor 134” page 28 col 10 lines 19-26), wherein said one or more computer processors are configured to execute the instructions to:

receive settings image data (“In some embodiments, the environment recognition component 1002 can determine the area/environment based on received image data (e.g., video and/or still images) captured of the area/environment.” Page 44 col 41 lines 18-21 and “The environment assessment component 1006 can be configured to assess a current area of a healthcare facility that is viewed (or has been selected for view) by a user based on the information determined, generated and/or received by the environment recognition component 1002 identifying the current area and/or information determined, generated, and/or received by the environment characterization component 1004 identifying objects and relative locations of the objects included in the user's current view. The environment assessment component 1006 can be configured to determine a variety of information associated with the area and the one or more objects included in the view of the area.” Page 45 col 43 lines 41-52) corresponding with the intraoperative settings (see figure 15);

run a trained computer vision model on the received settings image data (“For example, regarding video and/or still image data captured of a procedural environment and/or of healthcare professional performing a procedure, in one or more embodiments, the procedure characterization component 114 can use various image recognition algorithms to identify objects in the image data … features of the objects (e.g., color, size, brand, relative location, orientation, etc.). ... In various embodiments, the procedure characterization component 114 can generate information identifying objects, people, facial expressions, actions, behaviors, motions, etc., appearing in image data using a variety of models, including but not limited to: extracted features and boosted learning algorithms, bag-of-words models with features such as speeded-up robust features (SURF) and maximally stable extremal regions (MSER), gradient-based and derivative-based matching approaches, Viola-Jones algorithm, template matching, and image segmentation and blob analysis.” Page 33 col 20 lines 42-67) to identify and label the surgical related items and/or procedure related items in the intraoperative settings (See figure 15 and “The equipment/supplies information can include but is not limited to: descriptive information that describes specifications of the equipment or supplies, utilization information regarding utilization of the equipment or supplies, and performance information regarding clinical and/or financial performance of the healthcare facility associated with the equipment or supplies. For example, descriptive information associated with a particular piece of medical equipment or supply can include the type of information that would be likely be associated with the equipment or supply on a sales specification, such as but not limited to: a name or title for the equipment or supply, a description of the intended use, a manufacturer make and model, a dimension or sizing information, cost information, and the like.” Page 45 col 43 lines 66-67 and col 44 lines 1-13);

interpret the surgical related items and/or procedure related items through tracking and analyzing said identified and labeled surgical related items and/or procedure related items in the intraoperative settings (“The spatial data can also be used to facilitate identifying objects appearing in the image data, spatial relationships of objects appearing in the image data, and tracking movement of objects appearing in sequence of images captured over time (i.e., video data)” page 29 col 11 lines 14-18 and “The environment assessment component 1006 can be configured to assess a current area of a healthcare facility that is viewed (or has been selected for view) by a user based on the information determined, generated and/or received by the environment recognition component 1002 identifying the current area and/or information determined, generated, and/or received by the environment characterization component 1004 identifying objects and relative locations of the objects included in the user's current view. The environment assessment component 1006 can be configured to determine a variety of information associated with the area and the one or more objects included in the view of the area.” Page 45 col 43 lines 41-52) to determine said one or more characteristics of the surgical related items and/or procedure related items (“The equipment/supplies information can include but is not limited to: descriptive information that describes specifications of the equipment or supplies, utilization information regarding utilization of the equipment or supplies, and performance information regarding clinical and/or financial performance of the healthcare facility associated with the equipment or supplies. For example, descriptive information associated with a particular piece of medical equipment or supply can include the type of information that would be likely be associated with the equipment or supply on a sales specification, such as but not limited to: a name or title for the equipment or supply, a description of the intended use, a manufacturer make and model, a dimension or sizing information, cost information, and the like. Utilization information regarding utilization of a particular piece of medical equipment or supply can include information regarding usage of the medical equipment or supply by the current healthcare organization. For example, such usage information can describe how the healthcare facility uses the medical supply or equipment, information regarding degree of past usage, information regarding frequency of past usage, information regarding expected usage, and the like. In another example, the usage information can relate to current usage state or status of the medical supply or equipment (e.g., used or unused). In yet another example, the usage information can include information that effects usage of a medical supply or equipment, such as information regarding maintenance type issues associated with a piece of medical equipment or supply” page 45 col 43 lines 66-67 and col 44 lines 1-28 and figure 15); and

transmit said one or more determined characteristics to a secondary source (“The feedback component 120 can be configured to collate the information retrieved, generated or determined by the environment assessment component 1006 for potential provision as auxiliary information to the user viewing the assessed area as feedback in real-time. For example, the descriptive feedback component 1010 can collect and collate descriptive information generated by the equipment/supplies assessment component 1008 that describes specifications of the equipment or supplies.” Page 46 col 45 lines 28-36).

With respect to claim 6, Divine teaches the system of claim 1, wherein said trained computer vision model is generated on preliminary image data using a machine learning algorithm (“For example, regarding video and/or still image data captured of a procedural environment and/or of healthcare professional performing a procedure, in one or more embodiments, the procedure characterization component 114 can use various image recognition algorithms to identify objects in the image data … features of the objects (e.g., color, size, brand, relative location, orientation, etc.). ...
In various embodiments, the procedure characterization component 114 can generate information identifying objects, people, facial expressions, actions, behaviors, motions, etc., appearing in image data using a variety of models, including but not limited to: extracted features and boosted learning algorithms, bag-of-words models with features such as speeded-up robust features (SURF) and maximally stable extremal regions (MSER), gradient-based and derivative-based matching approaches, Viola-Jones algorithm, template matching, and image segmentation and blob analysis.” Page 33 col 20 lines 42-67). With respect to claim 8, The system of claim 1, wherein one or more of the following instructions: a) said receiving of said settings image data (see figure 10),b) said running of said trained computer vision model (see figure 10), and c) said interpreting of the surgical related items and/or procedure related items (see figure 10), may be performed on a server (see figure 10). With respect to claim 9, Divine teaches the system of claim 1, wherein said tracking and analyzing comprises object identification for tracking and analyzing (“The environment assessment component 1006 can be configured to assess a current area of a healthcare facility that is viewed (or has been selected for view) by a user based on the information determined, generated and/or received by the environment recognition component 1002 identifying the current area and/or information determined, generated, and/or received by the environment characterization component 1004 identifying objects and relative locations of the objects included in the user's current view. 
The environment assessment component 1006 can be configured to determine a variety of information associated with the area and the one or more objects included in the view of the area.” Page 45 col 43 lines 41-52); With respect to claim 10, Divine teaches the system of claim 1, wherein said tracking and analyzing comprises specified multiple tracking and analyzing models (see fig 10 element 110, Environment Assistant component, Environment Recognition component, environment Characterization component, Descriptive feedback component, Utilization Feedback component work in conjunction with the Equipment/Supplies Assessment component to track and analyze tools in the environment). With respect to claim 11, Divine teaches the system of claim 1, wherein said one or more computer processors are configured to execute the instructions for said tracking and analyzing at one or more of the following: one or more databases (“According to these embodiments, the equipment/supplies assessment component 1008 can scan various databases and sources to find and retrieve such information associated with a supply or medical equipment that is included in a user's current view. In other implementations, the equipment/supplies assessment component 1008 can run real-time reports on the data included in the various databases and sources to generate the utilization and performance information in real-time.” Page 46 col 45 lines 1-9); cloud infrastructure (“In an aspect, a component can emulate an electronic component via a virtual machine, e.g., within a cloud computing system” page 54 col 61 lines 50-52); and edge-computing (“Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. 
The network can comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.” Page 53 col 59 lines 30-38). With respect to claim 12, Divine teaches the system of claim 1, wherein said secondary source includes a display or graphical user interface (see figure 12B AR glasses and “the AR device 1104 can include or be communicatively coupled to the AR assistance module 110 to facilitate providing the user with auxiliary information regarding usage and/or performance of a healthcare system equipment in association with viewing the equipment.” Page 47 col 48 lines 28-33). With respect to claim 15, Divine teaches The system of claim 1, wherein said determined one or more characteristics includes any combination of one or more of the following:identification of the one or more of the surgical related items and/or procedure related items (“The equipment/supplies information can include but is not limited to: descriptive information that describes specifications of the equipment or supplies, utilization information regarding utilization of the equipment or supplies, and performance information regarding clinical and/or financial performance of the healthcare facility associated with the equipment or supplies. 
For example, descriptive information associated with a particular piece of medical equipment or supply can include the type of information that would be likely be associated with the equipment or supply on a sales specification, such as but not limited to: a name or title for the equipment or supply, a description of the intended use, a manufacturer make and model, a dimension or sizing information, cost information, and the like.” Page 45 col 43 lines 66-67 and col 44 lines 1-13); usage or non-usage status of the one or more of the surgical related items and/or procedure related items (see fig 15 and “For example, with respect to financial performance information associated with equipment and supplies included in an area of a healthcare facility viewed by a user, the equipment/supplies assessment component 1008 can access purchase information, usage information, maintenance information, billing information, etc. associated with usage and performance of the respective supplies and equipment and determine monetary values indicating costs associated with the respective supplies and equipment. For example, the cost information can indicate costs to the healthcare organization attributed to purchasing, using and maintaining the respective supplies and equipment and/or ROI associated with the supplies and/or equipment. The equipment/supplies assessment component 1008 can also access information regarding expected and/or budgeted costs for the equipment and supplies and determine information regarding a degree to which the actual costs associated with certain supplies and equipment are over budget or under budget.” Page 46 col 45 lines 9-27); moved or non-moved status of the one or more of the surgical related items and/or procedure related items (see figure 15, supply cabinet and trash); or association of clinical events, logistical events, or operational events (see fig 15 and the same passage quoted above at Page 46 col 45 lines 9-27).
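The over-budget or under-budget determination in the passage quoted for claim 15 amounts to a simple cost-variance computation. A minimal illustrative sketch in Python (the function and argument names are hypothetical, not taken from Divine's disclosure):

```python
def budget_variance(actual_cost, budgeted_cost):
    """Return (absolute variance, percent variance).

    Positive values indicate actual costs over budget; negative values
    indicate costs under budget. Hypothetical helper, not from Divine.
    """
    variance = actual_cost - budgeted_cost
    # Guard against a zero budget to avoid division by zero.
    percent = (variance / budgeted_cost) * 100 if budgeted_cost else 0.0
    return variance, percent
```

For example, `budget_variance(120000.0, 100000.0)` returns `(20000.0, 20.0)`, i.e. 20% over budget.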
With respect to claim 16, Divine teaches the system of claim 1, further comprising: one or more cameras configured to capture the image to provide said received image data (“For example, in one implementation, a user can wear or hold the user device 102 as the user moves throughout a healthcare facility and capture image data (e.g., video and/or still images) of the healthcare facility via the camera 104 from the perspective or the user. The user device 102 can further provided the image data to the AR assistance module 110 for processing thereof in real-time.” Page 44 col 41 lines 21-27). With respect to claim 17, Divine teaches the system of claim 1, wherein said one or more computer processors are further configured to, based on said determined one or more characteristics, execute the instructions to: determine an actionable output to improve efficiency of using the surgical related items for use in the intraoperative setting (“In one or more embodiments, the recommendation component 1602 can be configured to analyze the various information/data included in the one or more external information sources 138 (and/or in memory 130), as well as the information determined by the environment assessment component 1006 over time to determine recommendation information regarding how to improve an aspect of the state of performance of the healthcare organization. For example, the recommendation component 1602 can evaluate past performance information regarding the clinical and/or financial performance of the healthcare organization associated with utilization of various equipment, supplies and employees of the healthcare organization. Using one or more machine learning and/or deep learning techniques, the recommendation component 1602 can identify patterns in the data attributed to financial and/or clinical gains and losses and determine recommendation information regarding how to improve financial and/or clinical gain and minimize financial and/or clinical loss.
For example, such recommendation information can include the addition or removal of resources, changes to manners in which the healthcare organization uses respective equipment, supplies and employees, and the like. Relevant recommendation information (e.g., relevant to a current area being viewed by a user and a current context of the user) can further be selected by the selection component 1016 and provided to a user as overlay data in an AR or VR experience.” Page 50 col 54 lines 43-67 and page 51 col 55 lines 1-2); and/or determine an actionable output to recognize level of efficiency of surgeons or clinicians (“In one or more embodiments, the recommendation component 1602 can be configured to analyze the various information/data included in the one or more external information sources 138 (and/or in memory 130), as well as the information determined by the environment assessment component 1006 over time to determine recommendation information regarding how to improve an aspect of the state of performance of the healthcare organization. For example, the recommendation component 1602 can evaluate past performance information regarding the clinical and/or financial performance of the healthcare organization associated with utilization of various equipment, supplies and employees of the healthcare organization. Using one or more machine learning and/or deep learning techniques, the recommendation component 1602 can identify patterns in the data attributed to financial and/or clinical gains and losses and determine recommendation information regarding how to improve financial and/or clinical gain and minimize financial and/or clinical loss. For example, such recommendation information can include the addition or removal of resources, changes to manners in which the healthcare organization uses respective equipment, supplies and employees, and the like.
Relevant recommendation information (e.g., relevant to a current area being viewed by a user and a current context of the user) can further be selected by the selection component 1016 and provided to a user as overlay data in an AR or VR experience.” Page 50 col 54 lines 43-67 and page 51 col 55 lines 1-2, and “For example, with respect to an employee, the recommendation component 1602 can determine a suggested thing to say to a particular employee the user encounters that would improving the clinical and financial performance of the healthcare organization. According to this example, using one or more machine learning or deep learning techniques, the recommendation component 1602 can evaluate past performance information associated with an employee and determine an aspect of the employees' past performance that needs improvement. The recommendation component 1602 can also determine based on the current context of the user and the employee, a relevant suggested remark for the user to say to the employee that would encourage the employee to improve his or her performance. 
The suggested remark can further be provided to the user as overlay data in AR or VR when the user encounters the employee” page 51 col 55 lines 11-27). With respect to claim 18, Divine teaches the system of claim 1, wherein neither machine readable markings on the surgical related items and/or procedure related items nor communicable coupling between said system and the surgical related items and/or procedure related items are required by said system to provide said one or more determined characteristics (“In some embodiments, the environment recognition component 1002 can determine the area/environment based on received image data (e.g., video and/or still images) captured of the area/environment.” Page 44 col 41 lines 18-21 and “The environment assessment component 1006 can be configured to assess a current area of a healthcare facility that is viewed (or has been selected for view) by a user based on the information determined, generated and/or received by the environment recognition component 1002 identifying the current area and/or information determined, generated, and/or received by the environment characterization component 1004 identifying objects and relative locations of the objects included in the user's current view. The environment assessment component 1006 can be configured to determine a variety of information associated with the area and the one or more objects included in the view of the area.” Page 45 col 43 lines 41-52 and “The equipment/supplies information can include but is not limited to: descriptive information that describes specifications of the equipment or supplies, utilization information regarding utilization of the equipment or supplies, and performance information regarding clinical and/or financial performance of the healthcare facility associated with the equipment or supplies.
For example, descriptive information associated with a particular piece of medical equipment or supply can include the type of information that would be likely be associated with the equipment or supply on a sales specification, such as but not limited to: a name or title for the equipment or supply, a description of the intended use, a manufacturer make and model, a dimension or sizing information, cost information, and the like.” Page 45 col 43 lines 66-67 and col 44 lines 1-13 and “For example, regarding video and/or still image data captured of a procedural environment and/or of healthcare professional performing a procedure, in one or more embodiments, the procedure characterization component 114 can use various image recognition algorithms to identify objects in the image data … features of the objects (e.g., color, size, brand, relative location, orientation, etc.). ... In various embodiments, the procedure characterization component 114 can generate information identifying objects, people, facial expressions, actions, behaviors, motions, etc., appearing in image data using a variety of models, including but not limited to: extracted features and boosted learning algorithms, bag-of-words models with features such as speeded-up robust features (SURF) and maximally stable extremal regions (MSER), gradient-based and derivative-based matching approaches, Viola-Jones algorithm, template matching, and image segmentation and blob analysis.” Page 33 col 20 lines 42-67).
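Of the image-recognition approaches listed in the passage quoted for claim 18, template matching is the simplest to illustrate. A self-contained toy sketch (the data and function name are illustrative only, not from Divine or the application) that locates a small template in a grayscale grid by minimizing the sum of squared differences:

```python
def match_template(image, template):
    """Naive template matching by sum of squared differences (SSD).

    image and template are 2D lists of grayscale values; returns the
    (row, col) of the template's best-matching top-left position.
    Illustrative sketch only, not the method of any cited reference.
    """
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = None, None
    # Slide the template over every valid position and score the fit.
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th)
                for j in range(tw)
            )
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos
```

For instance, embedding the 2x2 template `[[9, 8], [7, 6]]` at position (1, 2) of an otherwise-zero 4x4 grid makes `match_template` return `(1, 2)`, the location of the exact (zero-SSD) match.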
With respect to claim 19, Divine teaches the system of claim 1, wherein said settings image data comprises information from the visible light spectrum (“In some embodiments, the environment recognition component 1002 can determine the area/environment based on received image data (e.g., video and/or still images) captured of the area/environment.” Page 44 col 41 lines 18-21 and “The environment assessment component 1006 can be configured to assess a current area of a healthcare facility that is viewed (or has been selected for view) by a user based on the information determined, generated and/or received by the environment recognition component 1002 identifying the current area and/or information determined, generated, and/or received by the environment characterization component 1004 identifying objects and relative locations of the objects included in the user's current view. The environment assessment component 1006 can be configured to determine a variety of information associated with the area and the one or more objects included in the view of the area.” Page 45 col 43 lines 41-52). With respect to claim 20, Divine teaches the system of claim 1, wherein said settings image data comprises three dimensional renderings or representation of information of the surgical related items and/or procedure related items in the intraoperative setting (“The image data can include live video data being captured of the selected area, still image data captured of the selected area, or model data (e.g., 2D/3D model data) including a regenerated representations or models of the selected area. With this implementation, the particular area and/or view of the environment will be known to the environment recognition component 1002 based on the input selection made by the user. 
In some one or more embodiments, the environment characterization component 1004 can use image analysis to identify objects and relative locations of the objects in the image data that is rendered to the user.” Page 45 col 43 lines 13-24). With respect to claim 21, Divine renders obvious all limitations in consideration of claim 1, because claim 21 is the method version of claim 1. With respect to claims 26, 28-32, 35 and 36, Divine teaches the method of claim 21 and renders obvious all claim limitations in consideration of claims 6, 8-12, 15 and 16, respectively, because each of claims 26, 28-32, 35 and 36 is the method version of the corresponding system claim.
With respect to claims 37-40, Divine teaches the method of claim 21 and renders obvious all claim limitations in consideration of claims 17-20, respectively, because each of claims 37-40 is the method version of the corresponding system claim. With respect to claim 41, Divine renders obvious all claim limitations in consideration of claim 1 because claim 41 is directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of claim 1. With respect to claims 46 and 48, Divine teaches the non-transitory computer readable medium of claim 41 and renders obvious all claim limitations in consideration of claims 6 and 8, respectively, because each of claims 46 and 48 is directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of the corresponding system claim.
With respect to claims 49-52 and 55-60, Divine teaches the non-transitory computer readable medium of claim 41 and renders obvious all claim limitations in consideration of claims 9-12 and 15-20, respectively, because each of claims 49-52 and 55-60 is directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of the corresponding system claim.
Claim Rejections - 35 USC § 103 Claims 2-5, 7, 13-14, 22-25, 27, 33-34, 42-45, 47, and 53-54 are rejected under 35 U.S.C. 103 as being unpatentable over Divine as applied to claim 1 above, and further in view of Buch (WO 2020023740 A1). With respect to claim 2, Divine teaches the system of claim 1, but does not teach the further limitations of claim 2. Buch teaches wherein one or more computer processors (“For example, the subject matter described herein can be implemented in software executed by a processor” page 2 lines 23-25) are configured to execute the instructions to: retrain said trained computer vision model using said received settings image data from the intraoperative settings (see figure 2 element 200). Buch is analogous art in the same field of endeavor as the claimed invention. Buch is directed towards surgical tool identification using neural networks (see figure 2). A person of ordinary skill would have found it obvious before the effective filing date of the claimed invention to combine the teachings of Divine and Buch by utilizing the machine learning model of Buch inside the surgical tool identifying process of Divine, with the expectation that doing so would lead to improved “generalizability of the output to many surgical scenarios. Also, general labeling (i.e. “bone”, “muscle”, “blood vessel”, etc.) will help to keep the scope of the network broad and capable of being used in many different types of surgery and even novel future applications.” (page 6 lines 18-23). With respect to claim 3, Divine and Buch teach the system of claim 2. Buch further teaches wherein a trained computer vision model is generated on preliminary image data using a machine learning algorithm (see figure 4 and “The boxes labeled Backbone and Conv represent the convolutional neural network being trained. The convolutional neural network backbone may be modified from the Detectron mask recurrent convolutional neural network” page 12 lines 19-22).
With respect to claim 4, Divine and Buch teach the system of claim 3. Buch further teaches wherein: said training of said computer vision model, may be performed locally (“local DNN CPU” page 4 lines 32-33). With respect to claim 5, Divine and Buch teach the system of claim 2. Buch further teaches wherein: said retraining of said computer vision model, may be performed locally (“local DNN CPU” page 4 lines 32-33). With respect to claim 7, Divine teaches the system of claim 6, but does not explicitly teach further limitations. Buch teaches wherein: said training of said computer vision model, may be performed locally (“local DNN CPU” page 4 lines 32-33). Buch is analogous art in the same field of endeavor as the claimed invention. Buch is directed towards surgical tool identification using neural networks (see figure 2). A person of ordinary skill would have found it obvious before the effective filing date of the claimed invention to combine the teachings of Divine and Buch by utilizing the machine learning model of Buch inside the surgical tool identifying process of Divine, with the expectation that doing so would lead to improved “generalizability of the output to many surgical scenarios. Also, general labeling (i.e. “bone”, “muscle”, “blood vessel”, etc.) will help to keep the scope of the network broad and capable of being used in many different types of surgery and even novel future applications.” (page 6 lines 18-23). With respect to claim 13, Divine teaches the system of claim 1, but does not teach the rest of the limitations. Buch teaches wherein a machine learning algorithm includes an artificial neural network (ANN) or deep learning algorithm (see figure 4 and “The boxes labeled Backbone and Conv represent the convolutional neural network being trained. The convolutional neural network backbone may be modified from the Detectron mask recurrent convolutional neural network” page 12 lines 19-22).
Buch is analogous art in the same field of endeavor as the claimed invention. Buch is directed towards surgical tool identification using neural networks (see figure 2). A person of ordinary skill would have found it obvious before the effective filing date of the claimed invention to combine the teachings of Divine and Buch by utilizing the machine learning model of Buch inside the surgical tool identifying process of Divine, with the expectation that doing so would lead to improved “generalizability of the output to many surgical scenarios. Also, general labeling (i.e. “bone”, “muscle”, “blood vessel”, etc.) will help to keep the scope of the network broad and capable of being used in many different types of surgery and even novel future applications.” (page 6 lines 18-23). With respect to claim 14, Divine and Buch teach the system of claim 13. Buch further teaches wherein said artificial neural network (ANN) includes: convolutional neural network (CNN); and/or recurrent neural networks (RNN) (see figure 4 and “The boxes labeled Backbone and Conv represent the convolutional neural network being trained. The convolutional neural network backbone may be modified from the Detectron mask recurrent convolutional neural network” page 12 lines 19-22). With respect to claim 22, Divine teaches the method of claim 21, and in view of Buch renders obvious all limitations in consideration of claim 2, because claim 22 is the method version of claim 2. With respect to claims 23 and 24, Divine and Buch teach the methods of claims 22 and 23, respectively, and render obvious all limitations in consideration of claims 3 and 4, because claims 23 and 24 are the method versions of claims 3 and 4.
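The convolutional neural networks Buch is cited for in claims 13-14 are built from the 2D convolution operation. A minimal pure-Python illustration of a single-channel "valid" convolution, computed as deep-learning libraries do (cross-correlation, i.e. without flipping the kernel); this is a generic sketch, not Buch's implementation:

```python
def conv2d_valid(x, k):
    """Single-channel 2D convolution with 'valid' padding.

    x is the input feature map and k the kernel, both 2D lists.
    Returns a (H - kh + 1) x (W - kw + 1) output map. Illustrative
    only; real CNN backbones stack many such layers per channel.
    """
    kh, kw = len(k), len(k[0])
    oh, ow = len(x) - kh + 1, len(x[0]) - kw + 1
    # Each output cell is the elementwise product-sum of the kernel
    # with the input window anchored at (i, j).
    return [
        [
            sum(x[i + a][j + b] * k[a][b] for a in range(kh) for b in range(kw))
            for j in range(ow)
        ]
        for i in range(oh)
    ]
```

For example, convolving a 3x3 input `[[1,2,3],[4,5,6],[7,8,9]]` with the 2x2 kernel `[[1,0],[0,1]]` sums each window's main diagonal, giving `[[6, 8], [12, 14]]`.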
With respect to claim 25, Divine and Buch teach the method of claim 22 and render obvious all limitations in consideration of claim 5, because claim 25 is the method version of claim 5. With respect to claim 27, Divine teaches the method of claim 26 and in view of Buch renders obvious all claim limitations in consideration of claim 7 because claim 27 is the method version of claim 7. With respect to claim 33, Divine teaches the method of claim 21 and in view of Buch renders obvious all claim limitations in consideration of claim 13 because claim 33 is the method version of claim 13. With respect to claim 34, Divine and Buch teach the method of claim 33 and render obvious all claim limitations in consideration of claim 14 because claim 34 is the method version of claim 14. With respect to claims 42-44, Divine, in view of Buch, renders obvious all claim limitations in consideration of claims 2-4, respectively, because each of claims 42-44 is directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of the corresponding system claim.
With respect to claims 45, 47, 53 and 54, Divine and Buch render obvious all claim limitations in consideration of claims 5, 7, 13 and 14, respectively, because each of claims 45, 47, 53 and 54 is directed towards a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the processes of the corresponding system claim. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA C WILLIAMS whose telephone number is (571)272-7074. The examiner can normally be reached M-F 7:30am - 4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew W Bee, can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REBECCA COLETTE WILLIAMS/
Examiner, Art Unit 2677
/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Dec 21, 2023
Application Filed
Feb 04, 2026
Non-Final Rejection — §101, §102, §103 (current)

Prosecution Projections

1-2
Expected OA Rounds
43%
Grant Probability
99%
With Interview (+66.7%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
