Prosecution Insights
Last updated: April 19, 2026
Application No. 18/439,310

SYSTEMS AND METHODS FOR GENERATING OF 3D INFORMATION ON A USER DISPLAY FROM PROCESSING OF SENSOR DATA FOR OBJECTS, COMPONENTS OR FEATURES OF INTEREST IN A SCENE AND USER NAVIGATION THEREON

Final Rejection — §103, §DP
Filed: Feb 12, 2024
Examiner: HAUSMANN, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Bentley Systems Capital LLC
OA Round: 2 (Final)

Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 76% (658 granted / 863 resolved; above average, +14.2% vs TC avg)
Interview Lift: +21.6% (strong) higher allowance rate among resolved cases with an interview than without
Typical Timeline: 3y 1m average prosecution
Currently Pending: 23
Career History: 886 total applications across all art units
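A minimal sketch of how the headline figures in this panel fit together, assuming they are simple ratios over resolved cases. The 658 granted / 863 resolved totals come from the panel itself; the split of resolved cases into with-interview and without-interview groups below is hypothetical, since only the +21.6% lift is reported here.

```python
# Sketch of the examiner statistics as simple ratios over resolved cases.
# Granted/resolved totals (658 / 863) come from the panel; the interview
# breakdown is hypothetical and chosen only to reproduce the reported lift.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career_rate = allow_rate(658, 863)  # ~76.2%, displayed as 76%

# Hypothetical split of the 863 resolved cases by whether an interview was held.
with_interview = {"granted": 271, "resolved": 300}     # illustrative only
without_interview = {"granted": 387, "resolved": 563}  # illustrative only

interview_lift = allow_rate(**with_interview) - allow_rate(**without_interview)

print(f"Career allow rate: {career_rate:.1f}%")      # 76.2%
print(f"Interview lift: {interview_lift:+.1f} pts")  # +21.6 pts
```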

Statute-Specific Performance

§101: 14.6% (-25.4% vs TC avg)
§103: 61.2% (+21.2% vs TC avg)
§102: 5.7% (-34.3% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 863 resolved cases
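The per-statute deltas are stated against a Tech Center average estimate that the panel does not print directly. As a minimal sketch, that implied baseline falls out of the displayed pairs by plain subtraction (all four statutes here imply a baseline near 40%):

```python
# Recover the Tech Center baseline implied by the panel above: rate - delta.

statute_stats = {          # (examiner %, delta vs TC avg %) as shown above
    "§101": (14.6, -25.4),
    "§103": (61.2, +21.2),
    "§102": (5.7, -34.3),
    "§112": (10.1, -29.9),
}

for statute, (rate, delta) in statute_stats.items():
    tc_avg = rate - delta  # e.g. §103: 61.2 - 21.2 = 40.0
    print(f"{statute}: examiner {rate:.1f}% vs TC average ~{tc_avg:.1f}%")
```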

Office Action

§103, §DP
DETAILED ACTION

Response to Amendment

Claims 1-20 are pending. Claim 1 is amended. Claims 2-20 are new.

Response to Arguments

Applicant's arguments filed 10 February, 2026 have been fully considered but they are not persuasive. Applicant argues that the cited references do not disclose a plurality of viewports. Chen teaches limitation b), "processing, by the computer, the first sensor data collection to generate a user display including at least a plurality of viewports wherein: i) each of the plurality of viewports is configured to display first object information associated with the first object; and ii) the displayed first object information is derived from the synchronized sensor data," with the following: “At 310, images are received that depict a person from multiple viewpoints. The images can be received from multiple cameras configured to capture a scene comprising the person from the multiple viewpoints. In at least one embodiment, the cameras comprise video cameras and the captured images comprise video frames depicting the person. The images can be received via one or more wired and/or wireless connections. The images may be received as part of a batch transmission. For example, images from multiple cameras can be received in a batch transmission that is associated with a given moment in time. In embodiments where the images are received from video cameras, multiple such batches can be received that are associated with different frame captures from the various video cameras” ([0047]) and “In example environment 900, the cloud 910 provides services for connected devices 930, 940, 950 with a variety of screen capabilities. Connected device 930 represents a device with a computer screen 935 (e.g., a mid-size screen). For example, connected device 930 could be a personal computer such as desktop computer, laptop, notebook, netbook, or the like. Connected device 940 represents a device with a mobile device screen 945 (e.g., a small size screen). For example, connected device 940 could be a mobile phone, smart phone, personal digital assistant, tablet computer, and the like. Connected device 950 represents a device with a large screen 955. For example, connected device 950 could be a television screen (e.g., a smart television) or another device connected to a television (e.g., a set-top box or gaming console) or the like” ([0099]). Paragraph 47 implies a plurality of viewports exists, as multiple different viewpoints are displayed, and paragraph 99 indicates at least three different types of displays, as 930, 940, and 950 each have a different type of screen and therefore this also describes a plurality of viewports. Hannuksela et al. also teach a plurality of viewports: “The viewport information may comprise one or both of the first viewport parameters of a prevailing viewport, and second viewport parameters of one or more expected viewports. The display device 920, 1420 may for example extrapolate head movement and/or acceleration/deceleration to estimate one or more expected viewports” ([0166]); “In a method the viewport information may be received, and the first spatial regions may be selected based on the viewport information. The viewport information may comprise one or both of first viewport parameters of prevailing viewport, and second viewport parameters of one or more expected viewports, and wherein the viewport parameters characterize a viewport and comprises one or more of a spatial location of a reference point, an orientation, extents, and a shape” ([0182]).
Applicant’s arguments with respect to the double patenting rejection of claim 1 along with the terminal disclaimer have been fully considered and are persuasive. The double patenting rejection of claim 1 has been withdrawn. Claims 19-20 are not examined, as these are restricted by original presentation, see below. It is noted a version of claims 7 and 15 could possibly be a better direction for amendment. Examiner recommends removing language “photosensor” and adding how the BIM and LIDAR viewports are synchronized.

Terminal Disclaimer

The terminal disclaimer filed on 10 February, 2026 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of US patent 11216663 has been reviewed and is accepted. The terminal disclaimer has been recorded.

Election/Restrictions

Newly submitted claims 19 and 20 are directed to an invention that is independent or distinct from the invention originally claimed for the following reasons: Claims 1 and new claim 19 would result in a restriction requirement and search burden. There are two distinct species. Species 1 comprises claims 1-18. Species 2 comprises claims 19-20.

Claim 1 indicates:

A method of generating a user display associated with at least one object in a scene or location comprising:
a) providing, by a computer, a first sensor data collection associated with a first object in a scene or location, wherein:
i) the first sensor data collection is generated from one or more sensor data acquisition events; and
ii) the first sensor data collection comprises synchronized sensor data including one or more sensor data types, wherein the first sensor data collection is generated by:
(1) transforming all sensor data in the first sensor data collection into a single coordinate system; or
(2) calculating one or more transformations for sensor data in the first sensor data collection, wherein the one or more transformations enable representation of the sensor data in the first sensor data collection in the single coordinate system;
b) processing, by the computer, the first sensor data collection to generate a user display including at least a plurality of viewports wherein:
i) each of the plurality of viewports is configured to display first object information associated with the first object; and
ii) the displayed first object information is derived from the synchronized sensor data;
c) defining, by the computer, a viewport of the plurality of viewports on the user display as a first object base viewport;
d) identifying, by the computer, each of one or more remaining viewports of the plurality of viewports on the user display as a first object dependent viewport comprising first object information; and
e) displaying, by the computer, the first object base viewport and each of the one or more first object dependent viewports of the plurality of viewports on the user display, wherein the displayed first object information in the first object dependent viewports substantially corresponds to a real-time position and orientation of a scene camera in the first object base viewport, thereby providing a concurrent display of synchronized first object information in each of the viewports.
Claim 19 indicates:

A computing device comprising:
a processor; and
a memory configured to store software for execution on the processor, the software when executed being operable to:
generate a first sensor data collection from one or more sensor data acquisition events conducted at a first time and a second sensor data collection from one or more additional sensor data acquisition events conducted at a different time than the first time, wherein each sensor data collection includes synchronized sensor data of one or more sensor data types that are transformed into a single coordinate system or associated with one or more transformations that enable representation in the single coordinate system,
process the first sensor data collection to generate a first object base viewport showing object information associated with the first object at the first time,
process the second sensor data collection to generate at least one or more first object dependent viewports showing the object information associated with the first object at the different time, and
display the first object base viewport and each of the one or more first object dependent viewports on the user display, wherein the object information in the first object dependent viewport substantially corresponds to a real-time position and orientation of a scene camera in the first object base viewport, and the object information in the first object dependent viewport represents the first object at the different time than the first time of the first object base viewport to provide a display of the object information associated with multiple times.

Claim 1 has the separate functionality of a plurality of viewports, providing a concurrent display of synchronized first object information in each of the viewports (classified generally in G05D 1/0016). Claim 19 has the separate functionality of one or more additional sensor data acquisition events conducted at a different time than the first time, where the first object dependent viewport represents the first object at the different time than the first time of the first object base viewport to provide a display of the object information associated with multiple times (classified generally in G05D 1/0246). There is a serious search and/or examination burden for the patentably distinct species as set forth above because at least the following reason(s) apply: The species or groupings of patentably indistinct species require a different field of search (e.g., searching different main-groups/sub-groups or electronic resources, or employing different search strategies or search queries). Since applicant has received an action on the merits for the originally presented invention, this invention has been constructively elected by original presentation for prosecution on the merits. Accordingly, claims 19-20 are withdrawn from consideration as being directed to a non-elected invention. See 37 CFR 1.142(b) and MPEP § 821.03. To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement. Otherwise, the election shall be treated as a final election without traverse. Traversal must be timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable upon the elected invention.
Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 3, 8, 10, 11, 12, 16, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20190139297 A1) in view of Hannuksela et al. (US 20180167613 A1) in view of Sewell (US 20120071752 A1).
Regarding claims 1 and 11, Chen discloses a method of generating a user display associated with at least one object in a scene or location comprising and non-transitory computer-readable medium having instruction stored thereon, the instructions when executed on one or more processors of one or more computing devices being operable to: a) providing, by a computer, a first sensor data collection associated with a first object in a scene or location (target objects captured from different viewpoints, abstract, received images can depict multiple people in a scene, [0025]) wherein: i) the first sensor data collection is generated from one or more sensor data acquisition events (capture image of a scene containing the target object, [0029], images from multiple cameras can be received in a batch transmission that is associated with a given moment in time, [0047]); and ii) the first sensor data collection comprises synchronized sensor data including one or more sensor data types (the camera devices 120 comprise cameras configured to synchronize image capture, [0030]), wherein the first sensor data collection is generated by: (1) transforming all sensor data in the first sensor data collection into a single coordinate system; or (2) calculating one or more transformations for sensor data in the first sensor data collection, wherein the one or more transformations enable representation of the sensor data in the first sensor data collection in a single coordinate system (A pixel coordinate position of a keypoint in an image can be used to look up a depth value for the keypoint in a depth map associated with the image, [0060], The 2D coordinates of the correlated keypoints in their respective images, and the relative positions of the associated cameras that captured the images, can be used to triangulate a 3D position of a body part associated with the correlated keypoints, [0062]); b) processing, by the computer, the first sensor data collection to generate a user display including at least a plurality of viewports wherein: i) each of the plurality of viewports is configured to display first object information associated with the first object; and ii) the displayed first object information is derived from the synchronized sensor data (“At 310, images are received that depict a person from multiple viewpoints. The images can be received from multiple cameras configured to capture a scene comprising the person from the multiple viewpoints. In at least one embodiment, the cameras comprise video cameras and the captured images comprise video frames depicting the person. The images can be received via one or more wired and/or wireless connections. The images may be received as part of a batch transmission. For example, images from multiple cameras can be received in a batch transmission that is associated with a given moment in time. In embodiments where the images are received from video cameras, multiple such batches can be received that are associated with different frame captures from the various video cameras”, [0047], “In example environment 900, the cloud 910 provides services for connected devices 930, 940, 950 with a variety of screen capabilities. Connected device 930 represents a device with a computer screen 935 (e.g., a mid-size screen). For example, connected device 930 could be a personal computer such as desktop computer, laptop, notebook, netbook, or the like. Connected device 940 represents a device with a mobile device screen 945 (e.g., a small size screen). 
For example, connected device 940 could be a mobile phone, smart phone, personal digital assistant, tablet computer, and the like. Connected device 950 represents a device with a large screen 955. For example, connected device 950 could be a television screen (e.g., a smart television) or another device connected to a television (e.g., a set-top box or gaming console) or the like”, [0099]) [paragraph 47 implies a plurality of viewports exist as multiple different viewpoints are displayed, and paragraph 99 indicates at least three different types of displays as 930, 940, and 950 each have a different type of screen and therefore this also describes a plurality of viewports]; c) defining, by the computer, a viewport of the plurality of viewports on the user display as a first object base viewport (“The images 126 and 128 can be captured by multiple camera devices 120 positioned to capture image of a scene containing the target object 130 from different viewpoints. Although two camera device 120 are depicted in FIG. 1, more or fewer camera devices can be used in some embodiments. In some cases, it may be possible to use a single camera device that is moved around a scene to capture images of the target object 130 from different perspectives”, [0029], The images can be received from multiple cameras configured to capture a scene comprising the person from the multiple viewpoints, [0047], “In some cases, a view port with which the rendering is performed can be moved in a 3D space containing the 3D representation of the person. In such cases, multiple renderings of the 3D representation can be performed as the viewport position is updated”, [0081]); d) identifying, by the computer, each of one or more remaining viewports of the plurality of viewports on the user display as a first object dependent viewport comprising first object information (“The images 126 and 128 can be captured by multiple camera devices 120 positioned to capture image of a scene containing the target object 130 from different viewpoints. Although two camera device 120 are depicted in FIG. 1, more or fewer camera devices can be used in some embodiments. In some cases, it may be possible to use a single camera device that is moved around a scene to capture images of the target object 130 from different perspectives”, [0029], The images can be received from multiple cameras configured to capture a scene comprising the person from the multiple viewpoints, [0047], “In some cases, a view port with which the rendering is performed can be moved in a 3D space containing the 3D representation of the person. In such cases, multiple renderings of the 3D representation can be performed as the viewport position is updated”, [0081]); and e) displaying, by the computer, the first object base viewport and each of the one or more first object dependent viewports of the plurality of viewports on the user display, wherein the displayed first object information in the first object dependent viewports substantially corresponds to a real-time position and orientation of a scene camera in the first object base viewport, thereby providing a concurrent display of synchronized first object information in each of the viewports (“The technologies described herein can be used to implement a holoportation system. Holoportation is a type of 3D capture technology that allows high quality 3D models of people and/or environments to be constructed and transmitted to a viewer in real -time.”, [0027]). 
Chen does not make explicit (1) transforming all sensor data in the first sensor data collection into a single coordinate system; or (2) calculating one or more transformations for sensor data in the first sensor data collection, wherein the one or more transformations enable representation of the sensor data in the first sensor data collection in a single coordinate system, or c) defining, by the computer, a viewport on the user display as a first object base viewport. Hannuksela et al. teach generating a user display associated with at least one object in a scene or location comprising: a) providing, by a computer, a first sensor data collection associated with a first object in a scene or location wherein: i) the first sensor data collection is generated from one or more sensor data acquisition events and ii) the first sensor data collection comprises synchronized sensor data including one or more sensor data types (The cameras/lenses may cover all directions around the center point of the camera set or the camera device. The images of the same time instance are stitched, projected, and mapped onto a packed VR frame, [0103]) wherein the first sensor data collection is generated by: (1) transforming all sensor data in the first sensor data collection into a single coordinate system; or (2) calculating one or more transformations for sensor data in the first sensor data collection, wherein the one or more transformations enable representation of the sensor data in the first sensor data collection in a single coordinate system (orientation of a viewport represented by angular coordinates of a coordinate system, [0101], Viewport parameters may comprise one or more of spatial location of a reference point (such as a center point), an orientation, extents, and a shape of the viewport. The spatial location may for example be indicated with spherical coordinates, such as yaw and pitch, in a spherical coordinate system. The orientation may for example be indicated with the roll parameters in a spherical coordinate system, where the roll accompanies yaw and pitch of a spatial location, [0169]) b) processing, by the computer, the first sensor data collection to generate a user display including at least a plurality of viewports wherein: i) each of the plurality of viewports is configured to display first object information associated with the first object; and ii) the displayed first object information is derived from the synchronized sensor data (The 360 degrees space can be assumed to be divided into a discrete set of viewports, each separated by a given distance (e.g., expressed in degrees), so that the omnidirectional space can be imagined as a map of overlapping viewports, and the primary viewport is switched discretely as the user changes his/her orientation while watching content with a HMD, [0115], “The viewport information may comprise one or both of the first viewport parameters of a prevailing viewport, and second viewport parameters of one or more expected viewports. The display device 920, 1420 may for example extrapolate head movement and/or acceleration/deceleration to estimate one or more expected viewports”, [0166], In a method the viewport information may be received, and the first spatial regions may be selected based on the viewport information. 
The viewport information may comprise one or both of first viewport parameters of prevailing viewport, and second viewport parameters of one or more expected viewports, and wherein the viewport parameters characterize a viewport and comprises one or more of a spatial location of a reference point, an orientation, extents, and a shape. [0182]) [language such as one or more expected viewports indicative of a plurality of viewports] c) defining, by the computer, a viewport of the plurality of viewports on the user display as a first object base viewport (For the sake of clarity, a part of the 360 degrees space viewed by a user at any given point of time is referred as a "primary viewport", [0110]-[0112]); d) identifying, by the computer, each of one or more remaining viewports of the plurality of viewports on the user display as a first object dependent viewport comprising first object information (other views, [0110]-[0112], overlapping viewports, [0115]) and e) displaying, by the computer, the first object base viewport and each of the one or more first object dependent viewports of the plurality of viewports on the user display, wherein the displayed first object information in the first object dependent viewports substantially corresponds to a real-time position and orientation of a scene camera in the first object base viewport, thereby providing a concurrent display of synchronized first object information in each of the viewports (In such streaming a subset of 360-degree video content covering the primary viewport (i.e., the current view orientation) is transmitted at the best quality/resolution, while the remaining of 360-degree video is transmitted at a lower quality/resolution, [0111]). Chen and Hannuksela et al. are in the same art of viewports (Chen, [0081]; Hannuksela et al., [0110]). The combination of Hannuksela et al. with Chen enables selection of a primary viewport. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the viewport selection of Hannuksela et al. with the invention of Chen as this was known at the time of filing, the combination would have predictable results, and as Hannuksela et al. indicate this will reduce data transmission rates needed for virtual reality content ([0004]), where one method to reduce the streaming bitrate of VR video is viewport adaptive streaming ([0111]) which will make the data transmission of Chen more efficient. Chen and Hannuksela et al. do not explicitly disclose (1) transforming all sensor data in the first sensor data collection into a single coordinate system; or (2) calculating one or more transformations for sensor data in the first sensor data collection, wherein the one or more transformations enable representation of the sensor data in the first sensor data collection in a single coordinate system. Sewell et al. 
teach generating a user display associated with at least one object in a scene or location comprising: a) providing, by a computer, a first sensor data collection associated with a first object in a scene or location, wherein: i) the first sensor data collection is generated from one or more sensor data acquisition events (“If the operator of the automobile is trying to park the car adjacent another car parked directly in front of him, it might be preferable to also have a view from a camera positioned, for example, upon the sidewalk aimed perpendicularly through the space between the two cars”, [0703], the orientation of the distal tip of the catheter may be measured using a 6-axis position sensor system, computer graphics view of the catheter is rotated until the master input and the computer graphics view of the catheter distal tip motion are coordinated and aligned with the camera view of the graphics scene, [0708]); and ii) the first sensor data collection comprises synchronized sensor data including one or more sensor data types, wherein the first sensor data collection is generated by: (1) transforming all sensor data in the first sensor data collection into a single coordinate system; or (2) calculating one or more transformations for sensor data in the first sensor data collection, wherein the one or more transformations enable representation of the sensor data in the first sensor data collection in a single coordinate system (To facilitate instinctive operation of the system, it is preferable to have the master input device coordinate system at least approximately synchronized with the coordinate system of at least one of the two views, [0344], synchronization of coordinate systems, [0347], To facilitate instinctive operation of the system, it is preferable to have the master input device coordinate system at least approximately synchronized with the coordinate system of at least one of the two views, [0702], 3-axis coordinate frame transformation, [0708]-[0710]); b) processing, by the computer, the first sensor data collection to generate a user display including at least a plurality of viewports wherein: i) each of the plurality of viewports is configured to display first object information associated with the first object; and ii) the displayed first object information is derived from the synchronized sensor data (computer graphics view of the catheter is rotated until the master input and the computer graphics view of the catheter distal tip motion are coordinated and aligned with the camera view of the graphics scene, [0708]); c) defining, by the computer, a viewport of the plurality of viewports on the user display as a first object base viewport (selecting which view is a primary view, [0346], Should the operator decide to toggle the system to use the rightmost view 144 as the primary navigation view, [0347], primary view, [0702], By selecting which view is a primary view, the user can automatically toggle a master input device 12 coordinate system to synchronize with the selected primary view, [0705]); d) identifying, by the computer, each of one or more remaining viewports of the plurality of viewports on the user display as a first object dependent viewport comprising first object information (secondary view, to facilitate instinctive operation of the system, it is preferable to have the master input device coordinate system at least approximately synchronized with the coordinate system of at least one of the two views, [0702]); and e) displaying, by the computer, the 
first object base viewport and each of the one or more first object dependent viewports of the plurality of viewports on the user display, wherein the displayed first object information in the first object dependent viewports substantially corresponds to a real-time position and orientation of a scene camera in the first object base viewport, thereby providing a concurrent display of synchronized first object information in each of the viewports (monitoring the position of objects, in a reference coordinate system, real-time or near real-time positional information, such as X-Y-Z coordinates in a Cartesian coordinate system, but also orientation information relative to a given coordinate axis or system, [0340], “In one embodiment, the instrument localization software is a proprietary module packaged with an off-the-shelf or custom instrument position tracking system, such as those available from Ascension Technology Corporation, Biosense Webster, Inc., Endocardial Solutions, Inc., Boston Scientific (EP Technologies), Medtronic, Inc., and others. Such systems may be capable of providing not only real-time or near real-time positional information, such as X-Y-Z coordinates in a Cartesian coordinate system, but also orientation information relative to a given coordinate axis or system”, [0699]). Chen and Hannuksela et al. and Sewell et al. are in the same art of viewports (Chen, [0081]; Hannuksela et al., [0110]; Sewell et al., [0702]). The combination of Sewell et al. with Chen and Hannuksela et al. enables coordinate system synchronization. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the synchronization of Sewell et al. with the invention of Chen and Hannuksela et al. as this was known at the time of filing, the combination would have predictable results, and as Sewell et al. indicate it is preferable to provide the operator with one or more secondary views which may be helpful in navigating through challenging tissue structure pathways and geometries ([0344]), which will better adapt the invention of Chen and Hannuksela et al. for biomedical applications. Regarding claims 2 and 12, Chen, Hannuksela et al., and Sewell et al. disclose the method and non-transitory CRM of claims 1 and 11. Chen, Hannuksela et al., and Sewell et al. further indicate moving the scene camera in and around the first object base viewport to a newposition and orientation; and updating, by the computer, the one or more first object dependent viewports on the user display substantially in real time to show the synchronized first object information based on the new position and orientation of the scene camera (Chen, “The technologies described herein can be used to implement a holoportation system. Holoportation is a type of 3D capture technology that allows high quality 3D models of people and/or environments to be constructed and transmitted to a viewer in real -time.”, [0027], “The images 126 and 128 can be captured by multiple camera devices 120 positioned to capture image of a scene containing the target object 130 from different viewpoints. Although two camera device 120 are depicted in FIG. 1, more or fewer camera devices can be used in some embodiments. 
In some cases, it may be possible to use a single camera device that is moved around a scene to capture images of the target object 130 from different perspectives”, [0029], The images can be received from multiple cameras configured to capture a scene comprising the person from the multiple viewpoints, [0047], “In some cases, a view port with which the rendering is performed can be moved in a 3D space containing the 3D representation of the person. In such cases, multiple renderings of the 3D representation can be performed as the viewport position is updated”, [0081]; Hannuksela et al., primary + other viewports, the primary viewport is switched discretely as the user changes his/her orientation, [0110]-[0115]; Sewell, “In one or more of the embodiments, the user interface of the robotic system may be configured to allow a user to register (or align) a real image of a catheter (e.g., a fluoroscopic image) with an image of a computer model of the catheter. This results in the real image catheter being in a same orientation as that of the model image, thereby allowing a user to instinctively drive the catheter (e.g., so that a command to move the catheter model to the right will result in the catheter moving to the right in the reference frame of the real image)”, [0709]). Regarding claim 3, Chen, Hannuksela et al., and Sewell et al. disclose the method of claim 2. Chen, Hannuksela et al., and Sewell et al. further indicate the updating further comprises: identifying, by the computer, one or more new first object dependent viewports that substantially correspond with the new position and orientation of the scene camera; and displaying, by the computer, the one or more first object dependent viewports on the user display (Chen, In some cases, it may be possible to use a single camera device that is moved around a scene to capture images of the target object 130 from different perspectives, [0029]; Hannuksela et al., selecting the second spatial region based on the received viewport information, [0009], “More than two parallel views may be needed for applications which enable viewpoint switching or for autostereoscopic displays which may present a large number of views simultaneously and let the viewers to observe the content from different viewpoints,” [0097], the displayed spatial subset of the VR video content may be selected based on the orientation of the device used for the viewing, or the device may enable content panning, e.g., by providing basic user interface (UI) controls for the user, [0102], “The 360 degrees space can be assumed to be divided into a discrete set of viewports, each separated by a given distance (e.g., expressed in degrees), so that the omnidirectional space can be imagined as a map of overlapping viewports, and the primary viewport is switched discretely as the user changes his/her orientation while watching content with a HMD”, [0115]). Regarding claims 8 and 16, Chen, Hannuksela et al., and Sewell et al. disclose the method and CRM of claims 1 and 11. Chen, Hannuksela et al., and Sewell et al. further indicate one or more of the plurality of viewports are synthetic images generated based on one or a combination of sensor data types captured for various positions and orientations to show the first object from the position and orientation of the scene camera (Chen, “The images 126 and 128 can be captured by multiple camera devices 120 positioned to capture image of a scene containing the target object 130 from different viewpoints.
Although two camera device 120 are depicted in FIG. 1, more or fewer camera devices can be used in some embodiments. In some cases, it may be possible to use a single camera device that is moved around a scene to capture images of the target object 130 from different perspectives”, [0029], capture an RGB (Red-Green-Blue) image of the scene comprising the target object 130 and a depth camera configured to capture depth information about the scene. In such an embodiment, the captured images can comprise RGB pixels and associated depth values. For example, a captured image can comprise a 2D RGB image and an associated depth map, [0030], The images can be received from multiple cameras configured to capture a scene comprising the person from the multiple viewpoints, [0047], RGBD 4-vectors for points in the scene, [0072], “In some cases, a view port with which the rendering is performed can be moved in a 3D space containing the 3D representation of the person. In such cases, multiple renderings of the 3D representation can be performed as the viewport position is updated”, [0081]; Hannuksela et al., RGB, [0063], orientation of a viewport represented by angular coordinates of a coordinate system, [0101], input images 700, such as fisheye images of a camera array or from a camera device with multiple lenses and sensors, is cross blended or stitched 710 onto a spherical image, [0105], RGB video, [0116], Viewport parameters may comprise one or more of spatial location of a reference point (such as a center point), an orientation, extents, and a shape of the viewport. The spatial location may for example be indicated with spherical coordinates, such as yaw and pitch, in a spherical coordinate system. The orientation may for example be indicated with the roll parameters in a spherical coordinate system, where the roll accompanies yaw and pitch of a spatial location, [0169]; Sewell et al., sensors, including those for sensing patient vitals, temperature, [0376], the orientation of the distal tip of the catheter may be measured using a 6-axis position sensor system, computer graphics view of the catheter is rotated until the master input and the computer graphics view of the catheter distal tip motion are coordinated and aligned with the camera view of the graphics scene, [0708]). Regarding claims 10 and 18, Chen, Hannuksela et al., and Sewell et al. disclose the method and CRM of claims 1 and 11. Chen, Hannuksela et al., and Sewell et al. further indicate providing, by the computer, a second sensor data collection associated with the first object in the scene or location, wherein the second sensor data collection is generated from one or more additional sensor data acquisition events conducted at a different time than the sensor data acquisition events of the first sensor data collection; processing, by the computer, the second sensor data collection to generate at least one additional first object dependent viewport representing the first object at the different time; and displaying, by the computer, the at least one additional first object dependent viewport representing the first object at the different time together with the first object dependent viewports to provide a display of first object information associated with multiple times (Chen, “At 310, images are received that depict a person from multiple viewpoints. The images can be received from multiple cameras configured to capture a scene comprising the person from the multiple viewpoints. 
In at least one embodiment, the cameras comprise video cameras and the captured images comprise video frames depicting the person. The images can be received via one or more wired and/or wireless connections. The images may be received as part of a batch transmission. For example, images from multiple cameras can be received in a batch transmission that is associated with a given moment in time. In embodiments where the images are received from video cameras, multiple such batches can be received that are associated with different frame captures from the various video cameras”, [0047]; Hannuksela et al., A view may be defined as a sequence of pictures representing one camera or viewpoint. The pictures representing a view may also be called view components. In other words, a view component may be defined as a coded representation of a view in a single access unit. In multiview video coding, more than one view is coded in a bitstream. Since views are typically intended to be displayed on stereoscopic or multiview autostereoscopic display or to be used for other 3D arrangements, they typically represent the same scene and are content-wise partly overlapping although representing different viewpoints to the content, [0098]; Referring back to FIG. 128A, in one embodiment, visualization software runs on the master computer 2400 to facilitate real-time driving and navigation of one or more steerable instruments. In one embodiment, visualization software provides an operator at an operator control station, such as that depicted in FIG. 1, with a digitized "dashboard" or "windshield" display to enhance instinctive drivability of the pertinent instrumentation within the pertinent tissue structures, [0702]). Claims 4 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20190139297 A1) and Hannuksela et al. (US 20180167613 A1) and Sewell et al. (US 20120071752 A1) as applied to claim 1 and 11 above, further in view of Westerhoff et al. (US 20180350130 A1). Regarding claims 4 and 13, Chen, Hannuksela et al., and Sewell et al. disclose the method and CRM of claims 1 and 11. Chen and Sewell et al. further indicate the first object base viewport includes a three- dimensional (3D) rendering of the first object and at least one first object dependent viewport includes a two-dimensional (2D) image of the first object (Chen, “The correlated feature keypoints can be used to create 3D feature coordinates for the associated features of the object. A 3D skeleton can be generated using the 3D feature coordinates. One or more 3D models can be mapped to the 3D skeleton and rendered. The rendered one or more 3D models can be displayed on one or more display devices”, abstract, The correlated feature keypoints can be used to create 3D feature coordinates for the associated features of the object. A 3D skeleton for the target object can be generated using the 3D feature coordinates, [0005], Holoportation is a type of 3D capture technology that allows high quality 3D models of people and/or environments to be constructed and transmitted to a viewer in real-time, [0027], The 3D skeleton generator 112 can be configured to generate the 3D skeleton 114 using 2D images 126 and 128 that depict the target object 130 from different perspectives. 
The images 126 and 128 can be captured by multiple camera devices 120 positioned to capture image of a scene containing the target object 130 from different viewpoints, [0029]; Sewell et al., secondary view, at least one of the two views, [0702]), however, another reference is provided to make this more explicit. Westerhoff et al. teach the first object base viewport includes a three-dimensional (3D) rendering of the first object and at least one first object dependent viewport includes a two-dimensional (2D) image of the first object (“FIG. 2 depicts an example study where the rules have created two Sets of Images. One Set of Images consists of a series of CT images forming a 3D volume, which is depicted in a volume rendered style in the Viewport 1160 in the upper left and in three orthogonal cross sections in the three other viewports in the left half of the screen. The second Set of Images consist of one chest X-Ray, assigned to a single Viewport 1160 covering the right half of the screen and rendering the X-Ray in 2D style. Appropriate data windows have been chosen by the rules to highlight the vasculature in the 3D rendering, as this is a study with contrast, as the rules can determine by the StudyDescription containing the word `contrast”, [0061], Rendering style (can be 2D, oblique, curved, MIP slab, 3D MIP, VRT, shaded VRT, etc.), [0216], Other alternative aspects include methods where the one or more View and Viewport Selection Rules contain protocols for one or more Viewports displaying images in a 3D rendering mode, [0293]) [which is considered base and object dependent interchangeable]. Chen and Hannuksela et al. and Sewell et al. and Westerhoff et al. are in the same art of viewports (Chen, [0081]; Hannuksela et al., [0110]; Sewell et al., [0702]; Westerhoff, [0061]). The combination of Westerhoff et al. with Chen and Hannuksela et al. and Sewell et al. enables different representations of data in different viewports. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the representations of Westerhoff et al. with the invention of Chen and Hannuksela et al. and Sewell et al. as this was known at the time of filing, the combination would have predictable results, and as Westerhoff et al. indicate in this manner the user is presented with images displayed based on their preferences without having to first manually adjust parameters (abstract), which will make the display process more natural and user friendly. Regarding claim 5, Chen, Hannuksela et al., and Sewell et al. disclose the method of claim 1. Chen and Sewell et al. further indicate the first object base viewport includes a two- dimensional (2D) image of the first object and at least one first object dependent viewport includes a three-dimensional (3D) rendering of the first object (Chen, “The correlated feature keypoints can be used to create 3D feature coordinates for the associated features of the object. A 3D skeleton can be generated using the 3D feature coordinates. One or more 3D models can be mapped to the 3D skeleton and rendered. The rendered one or more 3D models can be displayed on one or more display devices”, abstract, The correlated feature keypoints can be used to create 3D feature coordinates for the associated features of the object. 
A 3D skeleton for the target object can be generated using the 3D feature coordinates, [0005], Holoportation is a type of 3D capture technology that allows high quality 3D models of people and/or environments to be constructed and transmitted to a viewer in real-time, [0027], The 3D skeleton generator 112 can be configured to generate the 3D skeleton 114 using 2D images 126 and 128 that depict the target object 130 from different perspectives. The images 126 and 128 can be captured by multiple camera devices 120 positioned to capture image of a scene containing the target object 130 from different viewpoints, [0029]; Sewell et al., secondary view, at least one of the two views, [0702]), however, another reference is provided to make this more explicit. Westerhoff et al. teach the first object base viewport includes a two- dimensional (2D) image of the first object and at least one first object dependent viewport includes a three-dimensional (3D) rendering of the first object (“FIG. 2 depicts an example study where the rules have created two Sets of Images. One Set of Images consists of a series of CT images forming a 3D volume, which is depicted in a volume rendered style in the Viewport 1160 in the upper left and in three orthogonal cross sections in the three other viewports in the left half of the screen. The second Set of Images consist of one chest X-Ray, assigned to a single Viewport 1160 covering the right half of the screen and rendering the X-Ray in 2D style. Appropriate data windows have been chosen by the rules to highlight the vasculature in the 3D rendering, as this is a study with contrast, as the rules can determine by the StudyDescription containing the word `contrast”, [0061], Rendering style (can be 2D, oblique, curved, MIP slab, 3D MIP, VRT, shaded VRT, etc.), [0216], Other alternative aspects include methods where the one or more View and Viewport Selection Rules contain protocols for one or more Viewports displaying images in a 3D rendering mode, [0293]) [which is considered base and object dependent interchangeable]. Chen and Hannuksela et al. and Sewell et al. and Westerhoff et al. are in the same art of viewports (Chen, [0081]; Hannuksela et al., [0110]; Sewell et al., [0702]; Westerhoff, [0061]). The combination of Westerhoff et al. with Chen and Hannuksela et al. and Sewell et al. enables different representations of data in different viewports. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the representations of Westerhoff et al. with the invention of Chen and Hannuksela et al. and Sewell et al. as this was known at the time of filing, the combination would have predictable results, and as Westerhoff et al. indicate in this manner the user is presented with images displayed based on their preferences without having to first manually adjust parameters (abstract), which will make the display process more natural and user friendly. Claims 6 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20190139297 A1) and Hannuksela et al. (US 20180167613 A1) and Sewell et al. (US 20120071752 A1) as applied to claim 1 above, further in view of Ranjeet et al. (US 20170244959 A1). Regarding claims 6 and 14, Chen and Hannuksela et al. and Sewell et al. disclose the method and CRM of claims 1 and 11. Hannuksela et al. 
partly teach the scene or location includes a plurality of additional objects in addition to the first object, and the method further comprises: selecting, based on the scene camera, a second object from the plurality of additional objects from one of the viewports of the plurality of viewports; generating, by the computer, a plurality of second object viewports comprising second object information; and displaying, by the computer, the second object viewports, wherein the second object viewports include a second object base viewport and one or more second object dependent viewports, and the second object information in the second object dependent viewports substantially correspond to a real-time position and orientation associated with the second object base viewport to provide a concurrent display of synchronized second object information in each of the second object viewports (primary + other viewports, the primary viewport is switched discretely as the user changes his/her orientation, [0110]-[0115]) however another reference is added to make this explicit. Ranjeet et al. teach the scene or location includes a plurality of additional objects in addition to the first object, and the method further comprises: selecting, based on the scene camera, a second object from the plurality of additional objects from one of the viewports of the plurality of viewports; generating, by the computer, a plurality of second object viewports comprising second object information; and displaying, by the computer, the second object viewports, wherein the second object viewports include a second object base viewport and one or more second object dependent viewports, and the second object information in the second object dependent viewports substantially correspond to a real-time position and orientation associated with the second object base viewport to provide a concurrent display of synchronized second object information in each of the second object viewports (“In the example provided in FIG. 6, the viewports 608, 610, and 612 are depicted as displaying additional objects of interest that have been identified. Similar to the viewport 604, the viewports 608, 610, and 612 may be configured to track these additional objects of interest as they move through multiple views of the video. As discussed above, tracking an object of interest being displayed in a viewport may be terminated for various reasons, including but not limited to the object of interest becoming no longer visible in the video. For example, if the dog pictured in the viewport 604 becomes no longer visible in the video, the viewport 604 may switch to another view containing another object of interest. This may include switching to any one of the views of the viewports 608, 610, 612, or another view of an object of interest that is not displayed in a viewport in the user interface 602. Determining an object of interest to switch to in the viewport 604 may be dependent on the priority of the objects of interest as described above. In one or more implementations, when objects of interest in the viewports 608, 610, and 612 are no longer visible in the video, these viewports may be repopulated with other objects of interest, such as based on the priority of the objects of interest as described above”, [0064]). Chen and Hannuksela et al. and Sewell et al. and Ranjeet et al. are in the same art of viewports (Chen, [0081]; Hannuksela et al., [0110]; Sewell et al., [0702]; Ranjeet, [0064]). The combination of Ranjeet et al. 
with Chen and Hannuksela et al. and Sewell et al. enables displaying multiple objects. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the objects of Ranjeet et al. with the invention of Chen and Hannuksela et al. and Sewell et al., as this was known at the time of filing, the combination would have predictable results, and Ranjeet et al. indicate “Techniques described herein provide solutions to problems faced by viewers of videos having multiple views that were captured simultaneously in current video viewing applications” ([0022]); the combination will allow multiple objects to be tracked, which will be advantageous in crowded scenes.

Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20190139297 A1) and Hannuksela et al. (US 20180167613 A1) and Sewell et al. (US 20120071752 A1) as applied to claims 1 and 11 above, further in view of Troy et al. (US 20180157455 A1).

Regarding claims 7 and 15, Chen and Hannuksela et al. and Sewell et al. disclose the method of claims 1 and 11. Chen and Hannuksela et al. and Sewell et al. do not disclose that the synchronized sensor data includes two or more data types selected from the group consisting of: photosensor image data, thermal image data, radio frequency (RF) data, light detection and ranging (LIDAR) data, terrain elevation data, computer aided design (CAD) drawing data, and building information model (BIM) data.

Troy et al. teach the synchronized sensor data includes two or more data types selected from the group consisting of: photosensor image data, thermal image data, radio frequency (RF) data, light detection and ranging (LIDAR) data, terrain elevation data, computer aided design (CAD) drawing data, and building information model (BIM) data (“Two examples of side-by-side display of a 3-D CAD model visualization and a panoramic image on a display screen are shown in FIGS. 2 and 3 in the context of an aircraft in production. Similar examples of side-by-side display of a 3-D CAD model visualization and a video image on a display screen would have the same appearance as seen in FIGS. 2 and 3 since a photograph and a frame of a video of the same scene contain the same information. As was the case for synchronized display of panoramic images and 3-D CAD model visualizations, the video image and the 3-D CAD model visualization in the synchronized display disclosed herein would have the same user-selected viewpoint”, [0102]).

Chen and Hannuksela et al. and Sewell et al. and Troy et al. are in the same art of viewports (Chen, [0081]; Hannuksela et al., [0110]; Sewell et al., [0702]; Troy et al., [0102]). The combination of Troy et al. with Chen and Hannuksela et al. and Sewell et al. enables different representations of data in different viewports. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the representations of Troy et al. with the invention of Chen and Hannuksela et al. and Sewell et al., as this was known at the time of filing, the combination would have predictable results, and Troy et al. indicate “It would be desirable to provide enhancements that address the shortcomings of each of the above-described approaches by providing methods for combined use of data from physical (i.e., photographic or video) images and 3-D models” ([0008]), which this application purports to solve ([0045]); the combination will therefore increase the number of possible commercial applications by allowing different types of data to be effectively visualized together.

Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20190139297 A1) and Hannuksela et al. (US 20180167613 A1) and Sewell et al. (US 20120071752 A1) as applied to claims 1 and 11 above, further in view of Zhou et al. (US 20170039765 A1).

Regarding claims 9 and 17, Chen and Hannuksela et al. and Sewell et al. disclose the method and CRM of claims 1 and 11. Chen and Hannuksela et al. and Sewell et al. do not disclose that the one or more sensor data acquisition events include an unmanned aerial vehicle (UAV) imaging event in which a UAV is navigated in and around the scene or location, and the one or more sensor data types include 2D aerial images acquired by the UAV and information describing locations from which the 2D aerial images were acquired.

Zhou et al. teach (1) one or more sensor data acquisition events include an unmanned aerial vehicle (UAV) imaging event in which a UAV is navigated in and around the scene or location, and the one or more sensor data types include 2D aerial images acquired by the UAV and information describing locations from which the 2D aerial images were acquired (augmented video feed obtained by a camera of a manned or unmanned aerial vehicle (“UAV”), [0018]; UAV-video user experience by adding geo-registered layers from these or other information sources onto aerial video in real-time, [0043]; determination is made whether additional map and elevation tiles are available to be rendered based, at least in part, on a direction of travel of the UAV, [0097]; a UAV operator could see a no-fly zone or landmark before the camera field of view reached it, [0132]; location of the camera on the UAV in the real world, [0152]; UAV flies over a short-term or long-term period, circling around an area, [0178]); and (2) each of the plurality of 2D aerial images includes information associated with both a UAV imaging device location and orientation in the real-life scene or location when that 2D aerial image was acquired (metadata from the UAV platform and attached sensor would provide the exact location, angle, and optical parameters of the sensor for every frame of video, [0139]).

Chen and Hannuksela et al. and Sewell et al. and Zhou et al. are in the same art of viewports (Chen, [0081]; Hannuksela et al., [0110]; Sewell et al., [0702]; Zhou et al., abstract). The combination of Zhou et al. with Chen and Hannuksela et al. and Sewell et al. enables use of UAV data. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the UAV data of Zhou et al. with the invention of Chen and Hannuksela et al. and Sewell et al., as this was known at the time of filing, the combination would have predictable results, and Zhou et al. indicate “Unmanned Aerial Vehicles (“UAVs”) are a critical part of the modern battlefield, providing Intelligence, Surveillance, and Reconnaissance (“ISR”) capabilities for all branches of the armed forces and various civilian uses. In recent years, the number of UAV operations has increased dramatically for monitoring, surveillance, and combat-related missions, with the number of deployed UAVs increasing exponentially. However, due to the limited viewing angle and resolution of UAV video, users on the ground lack the appropriate context and situational awareness to make critical real-time decisions based on the video they are watching. Additionally, the camera angles typically found in UAV video can make even familiar terrain and objects difficult to recognize and understand. Accordingly, it is desirable to provide new systems and techniques to address these and other shortcomings of conventional UAV-video systems.” ([0004]-[0005]), providing a substantial commercial benefit to the combination by creating a market in the military field.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M ENTEZARI HAUSMANN, whose telephone number is (571) 270-5084. The examiner can normally be reached 10-7 M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent M Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE M ENTEZARI HAUSMANN/
Primary Examiner, Art Unit 2671
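The disputed limitations in the rejection above turn on two technical ideas: a plurality of viewports displaying synchronized sensor data of different types (claims 7/15), and UAV-acquired 2D aerial images that carry per-frame camera location and orientation metadata (claims 9/17). The sketch below is a minimal, illustrative model of those limitations only; all class and field names (SensorSample, Viewport, render_synchronized, etc.) are hypothetical and are not drawn from the application or from the cited references.

```python
# Illustrative sketch only; hypothetical names, not from the application or cited art.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple


class SensorDataType(Enum):
    PHOTO = "photosensor image"
    THERMAL = "thermal image"
    LIDAR = "LIDAR point cloud"
    CAD = "CAD drawing"
    BIM = "building information model"


@dataclass
class SensorSample:
    # For UAV 2D aerial images (claims 9/17), each sample carries the
    # per-frame camera location and orientation at capture time.
    timestamp: float
    source_uri: str
    location: Tuple[float, float, float]      # e.g. latitude, longitude, altitude
    orientation: Tuple[float, float, float]   # e.g. yaw, pitch, roll


@dataclass
class Viewport:
    # One of a plurality of viewports, each bound to one sensor data type
    # (claims 7/15 list photo, thermal, RF, LIDAR, terrain, CAD, BIM).
    name: str
    data_type: SensorDataType
    samples: List[SensorSample] = field(default_factory=list)


def render_synchronized(viewports: List[Viewport], timestamp: float) -> None:
    # Display, in every viewport, the sample closest to the shared timestamp,
    # so all viewports show the same moment of the scene.
    for vp in viewports:
        sample: Optional[SensorSample] = min(
            vp.samples, key=lambda s: abs(s.timestamp - timestamp), default=None
        )
        if sample is not None:
            print(
                f"[{vp.name}] {vp.data_type.value} @ t={sample.timestamp:.2f}s, "
                f"camera at {sample.location}, orientation {sample.orientation}"
            )


if __name__ == "__main__":
    aerial = Viewport(
        "aerial", SensorDataType.PHOTO,
        [SensorSample(0.0, "frame_000.jpg", (47.60, -122.33, 120.0), (90.0, -30.0, 0.0))],
    )
    model = Viewport(
        "model", SensorDataType.CAD,
        [SensorSample(0.0, "site_model.dgn", (47.60, -122.33, 120.0), (90.0, -30.0, 0.0))],
    )
    render_synchronized([aerial, model], timestamp=0.0)
```

The shared-timestamp lookup is what makes the viewports "synchronized" in the sense argued in the rejection: each viewport can bind a different data type (photo, CAD, LIDAR, and so on) while displaying data for the same moment in the scene, and each UAV sample retains the capture-time location and orientation metadata.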

Prosecution Timeline

Feb 12, 2024: Application Filed
Aug 22, 2024: Response after Non-Final Action
Oct 08, 2025: Non-Final Rejection — §103, §DP
Feb 10, 2026: Response Filed
Mar 21, 2026: Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602775
INTERPOLATION OF MEDICAL IMAGES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602793
Systems and Methods for Predicting Object Location Within Images and for Analyzing the Images in the Predicted Location for Object Tracking
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602949
SYSTEM AND METHOD FOR DETECTING HUMAN PRESENCE BASED ON DEPTH SENSING AND INERTIAL MEASUREMENT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597261
OBJECT MOVEMENT BEHAVIOR LEARNING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597244
METHOD AND DEVICE FOR IMPROVING OBJECT RECOGNITION RATE OF SELF-DRIVING CAR
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
Grant Probability with Interview: 98% (+21.6%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 863 resolved cases by this examiner. Grant probability derived from career allow rate.
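The with-interview figure follows from the base grant probability plus the examiner's interview lift. A minimal sketch of that arithmetic, assuming the lift is additive in percentage points (which matches the figures shown) and capping the result at 100%:

```python
# Illustrative arithmetic only; assumes an additive interview lift in
# percentage points, consistent with the figures above (76% + 21.6% ≈ 98%).
base_grant_probability = 0.76   # career allow rate for this examiner
interview_lift = 0.216          # lift observed in resolved cases with an interview

with_interview = min(base_grant_probability + interview_lift, 1.0)
print(f"Grant probability with interview: {with_interview:.0%}")  # -> 98%
```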
