Prosecution Insights
Last updated: April 19, 2026
Application No. 18/326,262

SYSTEM AND METHOD FOR AUTOMATIC DETERMINATION OF SIGN VISIBILITY

Non-Final OA: §103, §112
Filed
May 31, 2023
Examiner
CADEAU, WEDNEL
Art Unit
2632
Tech Center
2600 — Communications
Assignee
Digital Natives Ltd.
OA Round
1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 72%, above average (381 granted / 532 resolved; +9.6% vs TC avg)
Interview Lift: +19.6% for resolved cases with interview (strong, roughly +20%)
Avg Prosecution: 2y 9m typical timeline (42 currently pending)
Total Applications: 574 across all art units (career history)

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 75.6% (+35.6% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 532 resolved cases
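The per-statute deltas above can be cross-checked against the stated rates. A quick sketch, assuming each delta is a simple percentage-point difference between the examiner's rate and the Tech Center average, backs out the implied Tech Center average for each statute:

```python
# Examiner's statute-specific rates and deltas vs. Tech Center average,
# as reported in the figures above (percent).
rates = {"101": 2.5, "103": 75.6, "102": 3.5, "112": 16.5}
deltas = {"101": -37.5, "103": +35.6, "102": -36.5, "112": -23.5}

# Implied TC average = examiner rate minus delta.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute backs out to the same 40.0% TC average estimate
```

That every statute implies the same 40.0% baseline is consistent with the "Tech Center average estimate" caveat: the deltas appear to be measured against a single aggregate figure rather than per-statute averages.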

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Prior art cited in this Office action:
Brewington et al. (US 20220349719 A1, hereinafter “Brewington”)
Mori et al. (US 20200090375 A1, hereinafter “Mori”)
Bijlani et al. (US 20200294085 A1, hereinafter “Bijlani”)
Anand et al. (US 20190250981 A1, hereinafter “Anand”)

Claim Objections

Claim 8 is objected to because of the following informalities: the word “dispolay” should be replaced with the word “display”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 9 recites in part “further comprising determining an opportunity to see score and a likelihood to see score based in part on the visibility score.” It is not clear to the Office what is considered an opportunity to see or a likelihood to see, and how one of ordinary skill in the art would be able to ascertain what is considered an opportunity to see and a likelihood to see.
Appropriate explanation and/or correction are respectfully requested.

Claim 10 recites in part “further comprising providing an out of home display marketplace at least partly based on the visibility index scores”. It is not clear to the Office what applicant means by that statement. Does applicant mean providing a display that can be used in an out of home display marketplace environment? Appropriate explanation and/or correction are respectfully requested.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8, 11 and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over Brewington et al. (US 20220349719 A1, hereinafter “Brewington”) in view of Mori et al. (US 20200090375 A1, hereinafter “Mori”).
Regarding claims 1, 14 and 17: Brewington teaches a method/system/computer readable medium (Brewington [0003], [0011]-[0012]) for a digital system managing a plurality of out of home displays (here the preamble is cited as intended use because the claim does not include where and how the display is being managed, but it is addressed below for completeness’ sake; claim 17: the language “computer-readable mediums storing instructions” is interpreted as not invoking 101 based on the definition given by applicant in paragraph [0120], for example) comprising: receiving display positional data of a physical display (Brewington Abstract, [0009], [0012], where Brewington teaches The interactive landmark localization system may then determine the precise location of the user based on the location of the Burger King® sign and the estimated distance of the user from the Burger King® sign. The interactive landmark localization system may determine the orientation of the user based on the position and/or orientation of the Burger King® sign within the camera view); collecting image data based on location data of the display positional data (Brewington [0050], [0009], where Brewington teaches In some implementations, when the identified landmarks are not in the user's field of view, the client device may switch to an augmented reality (AR) mode, present a camera view, and provide the real-world imagery included in the camera view to the interactive landmark localization system. 
The interactive landmark localization system may then identify landmarks included in the camera view, for example, using object detection to determine the precise location and orientation of the user); determining a visibility index score for the physical display based in part on automated analysis of the image data (Brewington [0050], where Brewington teaches The server device 14 may use the geographic information for each landmark, including photographs of the landmark, two-dimensional or three-dimensional geographic coordinates for the landmark, a textual description of the landmark, the height of the landmark, the orientation that the landmark faces, the size of the landmark, the appearance of the landmark, the name of the landmark, and viewshed for the landmark for various conditions and/or at various times of day to determine a visibility score or ranking for each landmark from a particular location and orientation of a user. The server device 14 may then select the landmarks having the highest visibility scores or rankings for directing the user to a destination or meeting location. For example, the server device 14 may assign a higher visibility score to a landmark having a viewshed that includes the user's current location than a landmark having a viewshed that does not include the user's current location. Furthermore, the server device 14 may assign a higher visibility score to a landmark viewable from the direction that the user is currently facing than a landmark that is not viewable from the direction that the user is currently facing).

Brewington fails to explicitly teach wherein a digital system manages the displays. However, Mori teaches that the visibility judgment unit 12 calculates an index value of the object extracted at S2 for judging a degree of visibility when viewed from the driver by means of a predetermined system. This index value is defined as a “visibility index value” for explanation. 
This index value is calculated so that the more easily the object is recognized visually, the higher the index value becomes, and the more hardly the object is recognized visually, the lower the index value becomes, for example. The visibility judgment unit 12 uses feature information extracted from the image and the information from the sensor in the sensor unit 108, for example, to calculate this index value. In a calculation system according to the first embodiment, this index value is calculated by using at least one of the object distances, or a position and a size of the object in the screen and the image. The object distance is a distance between the camera 2 or the point of view of the own vehicle as described above and the target object. For example, the relatively larger the object distance becomes, the lower the index value becomes. Further, for example, the relatively smaller the object size in the image becomes, the lower the index value becomes. For example, the index value becomes lower as the position of the object in the image becomes a peripheral position relatively far from the central point. The visibility judgment unit 12 determines whether AR display regarding the target object is to be executed or not, an AR image, whether processing is to be executed or not, and the content of the processing (which is described as a type) on the basis of the classification of a visibility judgment result at S4. For example, in a case where the classification is the first classification, it is determined that the AR display is not to be executed as a first type. In a case where the classification is the second classification, it is determined that the AR display is executed together with image processing as a second type. (S6) The AR image generator unit 13 generates an AR image regarding an object to be displayed as AR on the basis of determination of the type at S5. 
At that time, the AR image generator unit 13 uses the image processing unit 14 to execute image processing for heightening visibility regarding the object whose visibility is low as the second classification and the AR image. The image processing unit 14 applies the enlarging process or the like according to the type as the image processing for the object area, and outputs image data after processing. The AR image generator unit 13 generates the AR image by using the image data after processing (Mori [0065]-[0084], figs. 3 and 7-9).

Therefore, taking the teachings of Brewington and Mori as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to manage/adjust the display based on the visibility index score, in order to allow the user looking at the display to better see the information displayed to increase guidance and effectiveness. The display can be in any location and for any purpose.

Regarding claims 2, 15 and 18: Brewington in view of Mori teaches wherein determining the visibility index score for the physical display based in part on the automated analysis of the image data comprises detecting display position and display area in each image and calculating the visibility index score based on a set of images and their associated angles, distance, and display area (Brewington [0009], [0033], [0050], [0053], [0068]; Mori [0055], [0065]-[0083], figs. 3 and 7-9; More specifically, the interactive landmark localization system may retrieve the locations, orientations, sizes, etc., of landmarks from a landmark database. The precise location and orientation of the user may be based on the locations, orientations, sizes, etc. of identified landmarks included in the camera view. 
For example, if a Walgreens® sign in the camera view faces south and the viewpoint of the camera view is directly opposite the Walgreens® sign, the interactive landmark localization system may determine that the user is facing north. In another example, the distance from the user's client device to the landmark may be determined based on the scale of the camera view. If the camera view is zoomed-in at a high level and a Burger King® sign is in the camera view, the interactive landmark localization system may determine that the user is nearby the Burger King® sign. On the other hand, if the camera view is zoomed-out and the Burger King® sign is in the background of the camera view, the interactive landmark localization system may determine that the user is far away from the Burger King® sign. The interactive landmark localization system may then determine the precise location of the user based on the location of the Burger King® sign and the estimated distance of the user from the Burger King® sign. The interactive landmark localization system may determine the orientation of the user based on the position and/or orientation of the Burger King® sign within the camera view).

Regarding claims 3, 16 and 19: Brewington in view of Mori teaches wherein determining the visibility index score for the physical display based in part on the automated analysis of the image data comprises: for each image instance of the collected image data, determining image area of the display in each image instance, determining an instance location visibility score for a location of each image instance, and determining the visibility index score by combining the instance location visibility scores for each image instance (Brewington [0050], [0053], [0068]; Mori [0065]-[0083], figs. 3 and 7-9; wherein the combination teaches More specifically, the interactive landmark localization system may retrieve the locations, orientations, sizes, etc., of landmarks from a landmark database. 
The precise location and orientation of the user may be based on the locations, orientations, sizes, etc. of identified landmarks included in the camera view. For example, if a Walgreens® sign in the camera view faces south and the viewpoint of the camera view is directly opposite the Walgreens® sign, the interactive landmark localization system may determine that the user is facing north. In another example, the distance from the user's client device to the landmark may be determined based on the scale of the camera view. If the camera view is zoomed-in at a high level and a Burger King® sign is in the camera view, the interactive landmark localization system may determine that the user is nearby the Burger King® sign. On the other hand, if the camera view is zoomed-out and the Burger King® sign is in the background of the camera view, the interactive landmark localization system may determine that the user is far away from the Burger King® sign. The interactive landmark localization system may then determine the precise location of the user based on the location of the Burger King® sign and the estimated distance of the user from the Burger King® sign. The interactive landmark localization system may determine the orientation of the user based on the position and/or orientation of the Burger King® sign within the camera view).

Regarding claim 4: Brewington in view of Mori teaches wherein collecting image data based on location data of the display positional data comprises collecting image data samples in proximity to a location of the display indicated by the display positional data (Brewington [0009], [0050], [0073]-[0075]; Mori [0065]-[0083], figs. 3 and 7-9; wherein the combination teaches The precise location of the user may be a location determined with a greater degree of accuracy than the location determined according to a positioning device, such as a GPS. 
More specifically, the interactive landmark localization system may retrieve the locations, orientations, sizes, etc., of landmarks from a landmark database. The precise location and orientation of the user may be based on the locations, orientations, sizes, etc. of identified landmarks included in the camera view).

Regarding claim 5: Brewington in view of Mori teaches wherein collecting image data samples is collected from a street-level mapping data set (Brewington [0003]-[0006], [0050]; Mori [0088], [0135]; where the combination teaches For example, as a signboard, an installed position thereof may change, or the described content thereof may change. Further, as a building, a state of appearance thereof may change due to under construction or the like. In a case where the registered content of map information in a DB in a car navigation unit 106 is not updated, it is impossible to acquire actual and the latest information of the object in real time. Thus, in this modification example, a function to update the information content of a DB in a DB unit 109 to the latest state so as to address a change in appearance of the object as soon as possible is provided. Herewith, each user can acquire the information of the object in the latest state as an AR image when an AR function is utilized, whereby it is possible to heighten convenience. In addition, maintenance work of the DB becomes easy for the business operator). 
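The rejection of claims 3, 16 and 19 above turns on the claimed aggregation: per image instance, determine the display's image area and an instance location visibility score, then combine the instance scores into one visibility index. A minimal sketch of one such combination, where the per-instance weighting by on-screen area is an illustrative assumption (the claim only requires *some* combination, and the application's actual formula is not in the record quoted here):

```python
from dataclasses import dataclass

@dataclass
class ImageInstance:
    display_area_px: float   # image area the display occupies in this instance
    frame_area_px: float     # total image area of the instance
    location_score: float    # instance location visibility score, 0..1

def visibility_index(instances: list[ImageInstance]) -> float:
    """Combine per-instance location scores into one visibility index,
    weighting each instance by the fraction of the frame the display fills
    (an illustrative choice, not taken from the application)."""
    if not instances:
        return 0.0
    weights = [i.display_area_px / i.frame_area_px for i in instances]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * i.location_score for w, i in zip(weights, instances)) / total

# Two street-level samples: a close-up with a clear view and a distant,
# partly occluded one. The close-up dominates the combined index.
samples = [ImageInstance(120_000, 480_000, 0.9),
           ImageInstance(4_000, 480_000, 0.3)]
print(round(visibility_index(samples), 3))  # -> 0.881
```

Under this weighting, adding many distant sightings barely moves the index, which matches the intuition that a display's visibility is dominated by the vantage points from which it actually fills the viewer's field of view.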
Regarding claim 6: Brewington in view of Mori teaches further comprising performing computer vision processing of the image data samples and thereby detecting a presence of the physical display in the image data (Brewington [0003]-[0006], [0036], [0050]; Mori [0088], [0115], [0151], where the combination teaches The stable regions may be extracted from the template landmark using a scale-invariant feature transform (SIFT), speeded up robust features (SURF), fast retina keypoint (FREAK), binary robust invariant scalable keypoints (BRISK), or any other suitable computer vision techniques).

Regarding claim 7: Brewington in view of Mori teaches further comprising collecting at least one supplemental data input associated with the physical display; and wherein determining the visibility index score for the physical display based in part on the automated analysis of the image data comprises determining the visibility index score for the physical display based in part on automated analysis of the image data in combination with at least one supplemental data input (Brewington [0003]-[0006], [0036], [0050]-[0053]; Mori [0088], [0115], [0135], [0151]; where the combination teaches at least More specifically, the interactive landmark selection module 44 may obtain several template landmarks from a database. The template landmarks may include signs, or other prominent physical objects. The interactive landmark selection module 44 may identify visual features of each of the template landmarks by detecting stable regions within the template landmark that are detectable regardless of blur, motion, distortion, orientation, illumination, scaling, and/or other changes in camera perspective). 
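Several rejections above lean on Mori's visibility index value, which the cited passages compute from the object distance, the object's on-screen size, and its position relative to the image center (larger distance, smaller size, and more peripheral position all lower the value). A minimal sketch of that kind of index; the normalizing constants and the equal weighting of the three terms are assumptions for illustration, not Mori's actual formula:

```python
import math

def visibility_index_value(distance_m: float,
                           bbox_area_px: float,
                           bbox_center: tuple[float, float],
                           frame_size: tuple[int, int] = (1920, 1080)) -> float:
    """Illustrative index per the Mori passages cited above: larger object
    distance, smaller on-screen size, and more peripheral position all lower
    the value. Normalizations and weights are assumptions for the sketch."""
    w, h = frame_size
    # Distance term: decays toward 0 as the object gets farther away.
    d_term = 1.0 / (1.0 + distance_m / 50.0)
    # Size term: fraction of (5% of) the frame the object occupies, capped at 1.
    s_term = min(1.0, bbox_area_px / (w * h * 0.05))
    # Position term: 1 at the frame center, falling toward 0 at the corners.
    cx, cy = bbox_center
    offset = math.hypot(cx - w / 2, cy - h / 2)
    max_offset = math.hypot(w / 2, h / 2)
    p_term = 1.0 - offset / max_offset
    return (d_term + s_term + p_term) / 3.0

near_center = visibility_index_value(20, 80_000, (960, 540))   # close, large, central
far_corner = visibility_index_value(200, 5_000, (100, 80))     # far, small, peripheral
print(near_center > far_corner)  # nearer, larger, more central => higher index
```

Mori's second classification (low visibility triggering AR enhancement) would then be a simple threshold on this value.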
Regarding claim 8: Brewington in view of Mori teaches wherein collecting image data based on the location data of the dispolay positional data further comprises identifying possible locations in proximity to the display and collecting image data samples associated with at least a subset of identified locations, wherein identifying possible locations comprises modeling viewable range from a location indicated in the display positional data, factoring in display direction and geographic features (Brewington [0003]-[0006], [0009], [0036], [0050]-[0053]; Mori [0052], [0084], [0088], [0115], [0135], [0151], fig. 6; the combination teaches at least If the identified landmarks are not in her field of view, the interactive landmark localization system may send other landmarks to the user’s client device. For example, the interactive landmark localization system may estimate that the user is facing north and may identify landmarks which are to the north of the user’s current location. If the user does not see these landmarks, the interactive landmark localization system may estimate that the user is facing south and may identify landmarks which are to the south of the user’s current location).

Regarding claim 11: Brewington in view of Mori teaches further comprising providing a programmatic interface to visibility index score of the physical display (Brewington [0099]; where the combination teaches For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs))).

Regarding claim 13: Brewington in view of Mori teaches wherein the physical display is a display type selected from the set of billboards, digital signs, urban panels, and spectaculars (Brewington [0004]-[0005]; Mori [0002]-[0003]).

Claims 9-10 are rejected under 35 U.S.C. 
103 as being unpatentable over Brewington et al. (US 20220349719 A1, hereinafter “Brewington”) in view of Mori et al. (US 20200090375 A1, hereinafter “Mori”) and further in view of Anand et al. (US 20190250981 A1, hereinafter “Anand”).

Regarding claim 9: Brewington in view of Mori fails to teach further comprising determining an opportunity to see score and a likelihood to see score based in part on the visibility score (Note: interpreted, for example, as displaying the score for visualization). Anand teaches FIG. 12 shows a second example of the display image. B1 indicates the region of interest. As indicated by B2 in FIG. 12, the icon, which is the reference image, is not clear in the analysis image. That is, some of the reference pixels do not exist in the analysis image, and if the function f is a unit function, the shape index S takes a value of less than one. In the case of such an unclear foreground, both the visibility index and the shape index take a small value (Anand [0168]-[0169], figs. 7, 12). Therefore, taking the teachings of Brewington, Mori and Anand as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to make the visibility score determined for the display available such that appropriate correction or action can be taken whenever possible.

Regarding claim 10: Brewington in view of Mori and in view of Anand teaches further comprising providing an out of home display marketplace at least partly based on the visibility index scores (Mori [0065]-[0084], figs. 3 and 7-9; Anand [0168]-[0169], figs. 7, 12).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Brewington et al. (US 20220349719 A1, hereinafter “Brewington”) in view of Mori et al. (US 20200090375 A1, hereinafter “Mori”) and further in view of Bijlani et al. (US 20200294085 A1, hereinafter “Bijlani”). 
Regarding claim 12: The combination fails to explicitly teach further comprising receiving display content for the physical display and providing display design creative feedback based in part on the visibility index score. However, Bijlani teaches Many modifications to the depicted environments may be made based on design and implementation requirements. For example, in at least one embodiment, the billboard aesthetics and safety rating generator 110A, 110B may generate a 3D model and provide simulation and rating information to local authorities so that the local authorities may review the model and simulation in advance and provide their own feedback to a user before the user may finalize designing a billboard (Bijlani [0043]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to provide feedback regarding the performance of the display such that improvements can be made to the design when necessary next time around, such as the quality of the display or a shape of the display that would make viewing easier.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU whose telephone number is (571)270-7843. The examiner can normally be reached Mon-Fri 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh Fan, can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEDNEL CADEAU/
Primary Examiner, Art Unit 2632
November 12, 2025

Prosecution Timeline

May 31, 2023
Application Filed
Nov 13, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586241: POSITION DETERMINATION METHOD, DEVICE, AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573052: METHOD AND APPARATUS FOR IMAGE SEGMENTATION
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573022: ANOMALY DETECTION FOR COMPONENT THROUGH MACHINE-LEARNING BASED IMAGE PROCESSING AND CONSIDERING UPPER AND LOWER BOUND VALUES
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573076: POSITION MEASUREMENT SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567178: THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 91% (+19.6%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 532 resolved cases by this examiner. Grant probability derived from career allow rate.
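The headline projections are simple functions of the career counts reported above; a quick check, assuming the grant probability is the raw career allow rate and the interview figure adds the stated lift in percentage points:

```python
granted, resolved = 381, 532           # career counts reported above
allow_rate = granted / resolved * 100  # -> 71.6%, displayed as 72%
interview_lift = 19.6                  # percentage points, from the interview stats
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.1f}% baseline, {with_interview:.0f}% with interview")
```

This reproduces both displayed figures (72% and 91%) from the underlying counts, consistent with the note that grant probability is derived from the career allow rate.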
