Prosecution Insights
Last updated: April 19, 2026
Application No. 18/520,991

SYSTEM AND METHOD FOR PROVIDING SCENE INFORMATION

Final Rejection — §103
Filed: Nov 28, 2023
Examiner: TOPGYAL, GELEK W
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Elbit Systems Ltd.
OA Round: 2 (Final)
Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 59% (grants 59% of resolved cases; 355 granted / 604 resolved; +0.8% vs TC avg)
Interview Lift: +19.3% (strong lift for resolved cases with interview)
Typical Timeline: 3y 8m avg prosecution; 35 applications currently pending
Career History: 639 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 25.4% (-14.6% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Tech Center average estimate plotted as baseline • Based on career data from 604 resolved cases
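As a sanity check, the Tech Center baseline implied by each row can be recovered from the listed rate and its delta. A minimal sketch, assuming (the tool does not say) that each delta is a simple difference of the statute's rate against a single TC-average estimate:

```python
# Statute-specific rates and deltas vs. the Tech Center average, as listed above.
# Assumption: delta = rate - tc_average, so the implied baseline is rate - delta.
stats = {
    "101": (6.1, -33.9),
    "103": (56.2, +16.2),
    "102": (25.4, -14.6),
    "112": (3.3, -36.7),
}

for statute, (rate, delta) in stats.items():
    implied_baseline = rate - delta
    print(f"§{statute}: rate {rate}%, delta {delta:+}% -> implied TC avg {implied_baseline:.1f}%")
```

Under this assumption every statute maps back to the same 40.0% baseline, which would suggest the plotted TC-average estimate is a single fixed figure rather than a per-statute average.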

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Arguments

Applicant's arguments filed 10/7/2025 have been fully considered but they are not persuasive.

On pages 8-10, applicants argue that Deng fails to teach the limitations of "providing scene related information" to "a user at a remote station" and "designating at the remote station, by the user, an ROI/target on the displayed scene representation". On pages 11-12, applicants argue that Seydoux fails to teach the last limitation of "transmitting the real-world data from the at least one sensor to the remote station in accordance with the associated PLV". In support of these arguments, applicants state that Deng's subject matter is aimed at achieving the exact opposite of the claimed invention, and that the present invention is directed at providing a user at a remote station real-world data descriptive of the ROI/target for displaying real-world data corresponding to the designated ROI/target.

In response, the examiner respectfully disagrees. The term "real-world data" and its interpretation is at the core of the difference between the examiner's interpretation and application of the prior art and what is being argued by the applicants. The claim recites "designating, at the remote station, by the user, an ROI/target on the displayed scene representation", followed by "displaying real-world data corresponding to the designated ROI/target". The term "real-world data" is vague and open to many different levels of interpretation. Prior art Deng, in paragraphs 96 and 97, initially teaches that the target user (remote user) interacts with the video captured in the real-world environment.
Data that corresponds to the real-world environment, regardless of whether it is obscured or replaced with other virtual objects, still constitutes "displaying real-world data" that "corresponds to the designated ROI/target". Paragraphs 99-105 further expand on this interaction by the remote user and clearly state the following: "generating and displaying a virtual representation of the real-world environment based on that gathered data may provide an automatic pivot from a virtual representation of an object of interest into a rich representation of the object of interest that more closely resembles a real-world appearance of the object of interest". Therefore, the claim language is not restricted to the interpretation advanced by the applicant's representative.

In addition to the above discussion, it is also the opinion of the examiner that the applicant's arguments, both with regard to Deng and with regard to Seydoux, fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.

Seydoux clearly teaches the requirement of the claim, which is to associate a priority level value with the designated ROI/target, in paragraphs 41-45, wherein a region P1 of the image/frame is given a priority value based on the user's input/gaze direction. Further, paragraphs 60-67 and 77-79 teach transmitting the user's measurements from a remote terminal to the ground station and updating the video for transmission to the remote terminal from the ground station.
Therefore, it is the position of the examiner that Seydoux teaches the limitations in question. The prior art rejection is accordingly maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Deng et al. (US 2019/0206141) in view of Seydoux et al. (US 2016/0148431).

Regarding claim 1, Deng teaches a system for providing scene related information from a scene including at least one real carrier platform, to a user at a remote station (Fig. 1, real environment shot by user device 202; a remote user using target device 208 interacts with the real environment), the system comprising: at least one memory configured to store data and program code instructions (paragraph 53); and at least one processor configured to execute program code instructions stored in the memory for performing the following steps (paragraph 53): generating, at the remote station, a representation of the scene (at least Figs. 6-8 and paragraph 40 show target device 208 generating, for display, a representation of the real environment); displaying, at the remote station, the scene representation (Figs. 6-8 show the target device displaying the environment captured by the first user's device 202); designating, at the remote station (paragraph 97 teaches the user remotely viewing and interacting with the people and objects within the real environment), by the user, an ROI/target on the displayed scene representation (paragraph 97 teaches the target user on the target device interacting with the objects and/or people in the real environment; additionally, paragraphs 99-104 teach further interactions that allow the remote/target user to designate regions within the field of view); and displaying real-world data corresponding to the designated ROI/target (paragraphs 96 and 99-104 teach displaying to the target user the designated ROI/target area that "corresponds" to the designated ROI/target).

However, Deng fails to teach, but in an analogous art Seydoux teaches: associating a priority level value (PLV) with the designated ROI/target (paragraphs 41-45 teach wherein a region P1 of the image/frame is given a priority value based on the user's input/gaze direction; paragraphs 60-62 teach measuring the input of the region of the entire image); and transmitting the real-world data from the at least one sensor to the remote station in accordance with the associated PLV (paragraphs 60-67 and 77-79 teach transmitting the user's measurements from the remote terminal to the ground station and updating the video for transmission to the remote terminal from the ground station).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Seydoux, because said incorporation allows for the benefit of reducing latency by requiring only a cropped portion of the video to be transitioned to a different resolution (paragraphs 78-79).
Regarding claim 2, Seydoux teaches the claimed wherein the associating of the PLV is performed by the user at the remote station (paragraphs 41-45 teach wherein a region P1 of the image/frame is given a priority value based on the user's input/gaze direction; paragraphs 60-62 teach measuring the input of the region of the entire image). The prior motivation as discussed above is incorporated herein.

Regarding claim 3, Seydoux teaches the claimed wherein the low-latency scene representation is based on local data (paragraphs 20, 78 and 90). The prior motivation as discussed above is incorporated herein.

Regarding claim 4, Seydoux teaches the claimed wherein the low-latency scene representation is based on data objects received from the scene (paragraphs 20, 78 and 90; low latency based on the P1 section being cropped/truncated). The prior motivation as discussed above is incorporated herein.

Regarding claim 5, Seydoux teaches the claimed wherein the steps further comprise selecting at least one portion of the at least one ROI for displaying the selected at least one portion of the at least one ROI at the remote station at higher resolution compared to a non-selected portion (paragraphs 41-45 teach wherein a region P1 of the image/frame is given a priority value based on the user's input/gaze direction; paragraphs 60-62 teach measuring the input of the region of the entire image; paragraphs 27, 55-57 and 78-79 teach the higher-resolution region P1). The prior motivation as discussed above is incorporated herein.

Regarding claim 6, Seydoux teaches the claimed wherein the steps further comprise: selecting a portion of the at least one ROI/target for displaying the selected portion of the at least one ROI/target at the remote station at lower latency compared to a non-selected portion (paragraphs 20, 78 and 90; low latency based on the P1 section being cropped/truncated). The prior motivation as discussed above is incorporated herein.
Regarding claim 7, Deng teaches the claimed wherein the at least one sensor is accommodated by one or more movable platforms (Deng teaches sensor 210 on user device 202 is on a movable-platform-type device, as in paragraphs 46-47).

Regarding claim 8, Deng teaches the claimed further comprising identifying an object type of the designated ROI/target (paragraphs 5-7, 27-28 and 30 teach identifying types of objects).

Regarding claim 9, Seydoux teaches the claimed wherein the object type can be one of a moving and non-moving object type (paragraphs 5-7, 27-28 and 30 teach identifying types of objects, including furniture, structures, and people).

Regarding claim 10, Seydoux teaches the claimed wherein the object type can be one of a high-interest and low-interest object type (paragraphs 30, 97 and 102-103 teach identifying objects of higher interest requiring privacy filters; some identified objects do not require privacy filters).

Method claims 11-20 are rejected for the same reasons as discussed for claims 1-10, respectively.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GELEK W TOPGYAL, whose telephone number is (571) 272-8891. The examiner can normally be reached M-F, 9:30-6 PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GELEK W TOPGYAL/
Primary Examiner, Art Unit 2481

Prosecution Timeline

Nov 28, 2023
Application Filed
Jul 10, 2025
Non-Final Rejection — §103
Oct 07, 2025
Response Filed
Jan 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601836
RADIO-WAVE SENSOR INSTALLATION ASSISTANCE DEVICE, COMPUTER PROGRAM, AND RADIO-WAVE SENSOR INSTALLATION POSITION DETERMINATION METHOD
2y 5m to grant · Granted Apr 14, 2026
Patent 12597341
INSTALLATION SUPPORT DEVICE FOR RADIO WAVE SENSOR, COMPUTER PROGRAM, METHOD OF DETERMINING INSTALLATION POSITION OF RADIO WAVE SENSOR, AND METHOD OF SUPPORTING INSTALLATION OF RADIO WAVE SENSOR
2y 5m to grant · Granted Apr 07, 2026
Patent 12592263
VIDEO VARIATION EFFECTS
2y 5m to grant · Granted Mar 31, 2026
Patent 12586607
MULTIMEDIA PROCESSING METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE, AND ELECTRONIC DEVICE
2y 5m to grant · Granted Mar 24, 2026
Patent 12567445
VIDEO REMIXING METHOD
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59%
With Interview: 78% (+19.3%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
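The headline projections are internally consistent with the career stats above. A minimal sketch of how they could be derived; the additive interview-lift formula is an assumption, since the tool's actual model is not disclosed:

```python
# Career totals from the examiner-intelligence panel above.
granted, resolved = 355, 604
allow_rate = granted / resolved            # ~0.588, shown as 59%

# Assumption: the "with interview" figure is the career allow rate
# plus the +19.3% interview lift, added directly.
interview_lift = 0.193
with_interview = allow_rate + interview_lift   # ~0.781, shown as 78%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

Both results round to the dashboard's displayed 59% and 78% figures.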
