Prosecution Insights
Last updated: April 19, 2026
Application No. 17/933,201

INTELLIGENT GENERATION OF THUMBNAIL IMAGES FOR MESSAGING APPLICATIONS

Final Rejection — §103

Filed: Sep 19, 2022
Examiner: DRYDEN, EMMA ELIZABETH
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Citrix Systems Inc.
OA Round: 2 (Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 58% (7 granted / 12 resolved; -3.7% vs Tech Center average)
Interview Lift: +25.0% (strong; resolved cases with an interview vs. without)
Typical Timeline: 3y 3m average prosecution; 34 applications currently pending
Career History: 46 total applications across all art units

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 12 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for priority based on a National Stage application PCT/CN2022/116746 filed on 09/02/2022. It is noted, however, that applicant has not filed a copy of the application, and the application could not be located in order to consider the priority date. The applicant should provide a copy of the PCT application in order to be granted the earlier priority date in the instant application.

Response to Amendment

The amendment filed 01/20/2026 has been entered. Claims 1-20 remain pending in the application.

Response to Arguments

Applicant's arguments, see pg. 1 of the remarks, filed 01/20/2026, with respect to the claim objections have been fully considered and are persuasive. Accordingly, the claim objections to claims 5, 14, and 20 from the Non-Final Office Action of 10/20/2025 have been withdrawn. However, “the content associated with the topic” should be introduced in claims 5, 14, and 20; see the updated claim objection below.

Applicant's arguments, see pg. 2 of the remarks, filed 01/20/2026, with respect to the amended independent claims have been considered but are moot because the new ground of rejection relies on a new combination of references. In response to the remarks regarding Zhang and Dai individually, the examiner notes that Zhang states in paragraph 153, cited in the rejection of claim 1, that the device for generating thumbnails based on information streams may be a messaging device. Further, paragraph 156 of Zhang states that messages may be stored on the device to support operations/applications. Thus, the method may be performed on a messaging device running messaging applications.
However, further details regarding a number of most recent text messages analyzed are omitted, as described in the new grounds of rejection below. Though Dai does not perform text analysis, the methods are performed in real-time in a messaging system. In the combination of Zhang in view of Dai, Dai is relied upon to teach wherein the thumbnail generation is in response to a determination of a sending of a message including an image within the messaging session; and sending the generated thumbnail image with the message to another computing device. Since the thumbnail generation is in response to the sending of a message, the method is performed “in real-time as messages are being transmitted in the electronic messaging system”. Dai's method is performed during instant messaging; see paragraph 11: “a method and device for transmitting pictures in instant messaging”.

In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Specification

The disclosure is objected to because of the following informalities:

In paragraph 58, "thumbnail image generator module 538" should read "thumbnail image generator module 528".
In paragraph 62, "messaging session. to which the" should read "messaging session to which the".

Appropriate correction is required.

Claim Objections

Claims 5, 14, and 20 are objected to because of the following informalities: “includes the content associated with the topic” should read “includes content associated with the topic”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 9-12, 14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (CN Patent No. 114220115 A), hereinafter Zhang, in view of Gong et al. (U.S. Patent Application Publication No. 2018/0205681 A1), hereinafter Gong, in further view of Dai et al. (CN Patent No. 107306219 A), hereinafter Dai.

Regarding claim 1, Zhang teaches a method comprising: determining, by a computing device (Zhang, para 155: “processor 1102”), a topic of an electronic messaging session (Zhang, para 84-85: “Perform natural language processing on the text in the information flow to obtain the text semantics of the text… the information flow includes text and its corresponding multiple images”; implemented on a messaging device, para 153: “device 1100 for generating thumbnails based on information streams according to an exemplary embodiment. For example, the apparatus 1100 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device”); and by the computing device: determining contents of an image associated with the topic of the messaging session (Zhang, para 77: “the image semantics of each candidate image are matched with the text semantics of the text”; para 88: “The image semantics of the candidate image may specifically refer to the categories of each key element in the candidate image”; performed on a target image, para 77: “the image semantics of each candidate image are matched with the text semantics of the text, and at least one candidate image is determined as the target image from the multiple candidate images; based on the category and preset display size of the target image, computer vision technology is used to process the target image to generate a thumbnail”); generating a thumbnail image to include the contents of the image associated with the topic of the messaging session (Zhang, para 102: “the positions of the key elements in the target image should be considered first, and a candidate area including the key elements should be determined in the target image… the candidate area is processed using computer vision technology to generate a thumbnail”; see Figure 3).

Zhang fails to explicitly teach 1) wherein the determining a topic is performed in real-time as messages are being transmitted in the electronic messaging system, by analyzing the most recent N messages of the electronic messaging session, where N is a positive integer; and 2) wherein the thumbnail generation is in response to a determination of a sending of a message including an image within the messaging session; and sending the generated thumbnail image with the message to another computing device.
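As a reading aid outside the Office Action text, the claim 1 method as mapped above can be sketched in Python. Every name and heuristic here (the keyword-frequency stand-in for NLP topic detection, the label matching, the center-crop arithmetic) is a hypothetical illustration of the claimed steps, not Zhang's, Gong's, Dai's, or the applicant's actual implementation.

```python
from collections import Counter

def determine_topic(messages, n):
    """Determine a session topic from the most recent N messages
    (illustrative: top keyword by frequency, standing in for NLP)."""
    words = [w.lower().strip(".,?!") for m in messages[-n:] for w in m.split()]
    keywords = [w for w in words if len(w) > 3]  # crude stopword filter
    return Counter(keywords).most_common(1)[0][0]

def contents_matching_topic(image_labels, topic):
    """Find labeled image regions whose label matches the topic.
    image_labels: list of (label, (x, y, w, h)) tuples."""
    return [box for label, box in image_labels if label == topic]

def generate_thumbnail(image_size, region, thumb_size=(128, 128)):
    """Return crop coordinates centering the matched region
    (a stand-in for Zhang's computer-vision processing)."""
    x, y, w, h = region
    cx, cy = x + w // 2, y + h // 2
    tw, th = thumb_size
    left = min(max(cx - tw // 2, 0), image_size[0] - tw)
    top = min(max(cy - th // 2, 0), image_size[1] - th)
    return (left, top, left + tw, top + th)

# On sending a message that includes an image, generate the thumbnail
# and send it with the message (the step mapped to Dai above):
messages = ["shall we get lunch?", "lunch at the ramen place downtown"]
topic = determine_topic(messages, n=2)
regions = contents_matching_topic(
    [("lunch", (300, 200, 80, 60)), ("sky", (0, 0, 640, 100))], topic)
crop = generate_thumbnail((640, 480), regions[0])
```

The clamping in `generate_thumbnail` keeps the crop inside the image bounds while keeping the matched content as close to center as possible.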
However, Gong teaches a similar method (Gong, abstract: “improve the functionality of electronic messaging and imaging software and systems by determining topics of electronic communications between users and generating customized media content items based on such topics”) wherein the determining a topic is performed in real-time as messages are being transmitted in the electronic messaging system (Gong, step 410 in FIG. 4, para 42: “system identifies (405) an electronic communication (also referred to as a “conversation” in this Application) between two or more users in order to analyze the communication (410) to identify one or more topics”; analysis occurs in real-time – see current time in para 35 and current set of communications in para 44), by analyzing the most recent N messages of the electronic messaging session, where N is a positive integer (Gong, 2 most recent messages are analyzed in FIG 5A, para 44: “FIG. 5A displays an exemplary screenshot of electronic communications (text messages in this example) between Kirk and Yunchao. In this example, the system may analyze the communications between Kirk and Yunchao to determine that the topic of where to have lunch is associated with the communications”).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the relevant teachings of Gong with the method of Zhang in order to generate image content based on the most recent messages sent (Gong, see FIG. 5C wherein the generated image pertains to the topic of the last two messages sent). If the messages analyzed are too old, the thumbnail image generated may be irrelevant to the current conversation, thus rendered useless to the user.
In the combination of Zhang in view of Gong, a person of ordinary skill in the art would be able to apply the methods taught by Zhang to the most recent N messages in the messaging device of Zhang, as similarly demonstrated by the teachings of Gong.

Further, Dai teaches a method for sharing images between two computing devices in a thumbnail format (Dai, para 22: “wherein the picture selection module is used to select a picture selected by a user as a picture to be sent, the thumbnail generation module generates a thumbnail of the picture to be sent”; para 24: “The message server is used to forward message messages between the first mobile terminal and the second mobile terminal”) wherein responsive to a determination of a sending of a message including an image within the messaging session, performing subsequent thumbnail generation (Dai, para 44: “Step S2: The first mobile terminal generates a thumbnail based on the picture to be sent”); and sending the generated thumbnail image with the message to another computing device (Dai, para 50: “the first mobile terminal generates a message and sends it to the message server”; para 52-53: “After receiving the message, the second mobile terminal downloads the picture and its thumbnail… The second mobile terminal downloads the thumbnail image first. The original image will be downloaded only when the user clicks to view the image further”).

Zhang teaches a method regarding the generation of thumbnail images using a messaging device, but fails to explicitly teach the steps of sending the thumbnail image. Dai teaches the aforementioned method for sharing images between two computing devices in a thumbnail format.
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the thumbnail generation and sending in response to an image message, as taught by Dai, with the method/messaging device of Zhang in order to increase the image transmission speed between messaging devices (Dai, para 30: “Compared with the existing technology, the present invention can increase the loading speed of pictures during communication, improve user experience, avoid program crashes, and adapt to networks with lower transmission rates”). A person having ordinary skill in the art would be able to carry out the natural language processing method of Zhang in response to the sending of a message with an image in the messaging device of Zhang, similar to the method taught by Dai.

Additionally, similar to the combination of Zhang in view of Gong, Dai also teaches wherein the method is performed in real-time as messages are being transmitted in the electronic messaging system (the thumbnail is generated right before the image is sent), and could be implemented in the combined method.

Regarding claim 2 (dependent on claim 1), Zhang in view of Gong and Dai teaches wherein the contents of the image associated with the topic is centered in the thumbnail image (Zhang, see Figure 3; para 106: “as shown in Figure 3, a schematic diagram of generating a thumbnail when the candidate area in a target image of a general category meets the preset display size and each key element in the candidate area is complete”).

Regarding claim 3 (dependent on claim 1), Zhang in view of Gong and Dai teaches wherein the determining the topic of the electronic messaging session includes applying natural language processing (NLP) to one or more messages in the messaging session (Zhang, para 86: “natural language processing may be, for example, a TextRank algorithm or a Lexrank algorithm, etc. The TextRank algorithm or the Lexrank algorithm may automatically calculate the weight of each word in the text to extract keywords in the text as the text semantics of the text”).

Regarding claim 5 (dependent on claim 1), Zhang in view of Gong and Dai teaches wherein the determining of the contents of the image associated with the topic of the messaging session includes determining coordinates of a portion of the image which includes the contents associated with the topic of the messaging session (Zhang, see Figure 3 where the location of the key element is identified in order to create the thumbnail; para 44-45: “configured to determine a candidate region in the target image based on positions of each key element in the target image; A generating subunit is configured to perform computer vision processing on the candidate area based on the preset display size to generate the thumbnail”).

Regarding claim 9 (dependent on claim 1), Zhang in view of Gong and Dai teaches wherein the computing device is a client (Dai, first mobile terminal is a client with regard to a file server, para 14: “After obtaining the URL of the picture to be sent and its thumbnail on the file server, the first mobile terminal generates a message”) and the another computing device is a server (Dai, para 50: “message server”; see also para 52-53 with regard to information sent to the second mobile terminal).

Regarding claim 10 (dependent on claim 1), Zhang in view of Gong and Dai teaches wherein the computing device is a server (Dai, first mobile terminal performs the operations to generate the thumbnail and sends to the message server, see para 50) and the another computing device is a client (Dai, message server receives the image from the first mobile terminal).
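The TextRank algorithm quoted from Zhang's paragraph 86 can be illustrated generically: words become nodes in a graph linked by co-occurrence within a sliding window, and a PageRank-style power iteration scores them, with the highest-scoring words taken as keywords. This is a minimal sketch of the published algorithm, not code from any cited reference; the window size, length-based stopword filter, and damping factor are assumptions.

```python
def textrank_keywords(text, window=2, damping=0.85, iters=50):
    """Rank words by TextRank: build a co-occurrence graph over a
    sliding window, then run a PageRank-style power iteration."""
    words = [w.lower().strip(".,!?") for w in text.split()]
    words = [w for w in words if len(w) > 3]          # crude stopword filter
    graph = {w: set() for w in words}
    for i, w in enumerate(words):                     # link co-occurring words
        for u in words[max(0, i - window): i + window + 1]:
            if u != w:
                graph[w].add(u)
                graph[u].add(w)
    score = {w: 1.0 for w in graph}
    for _ in range(iters):                            # power iteration
        score = {
            w: (1 - damping) + damping * sum(
                score[u] / len(graph[u]) for u in graph[w] if graph[u])
            for w in graph
        }
    return sorted(score, key=score.get, reverse=True)

text = ("thumbnail generation for messaging thumbnail images "
        "should match message topics")
keywords = textrank_keywords(text)
```

Repeated words accumulate more graph neighbors and so tend to rank higher, which is the intuition behind using the top-ranked words as the text semantics.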
Regarding claim 11, Zhang teaches a computing device comprising: a processor (Zhang, para 155: “processor 1102”); and a non-volatile memory storing computer program code that when executed on the processor causes the processor to execute a process (Zhang, para 164: “a non-transitory computer-readable storage medium including instructions is also provided, such as a memory 1104 including instructions, which can be executed by the processor 1120 of the device 1100 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM”). All further claim limitations are met and rendered obvious by Zhang in view of Gong and Dai because the method steps of claim 1 are the same as claim 11.

Regarding claim 12, all claim limitations are met and rendered obvious by Zhang in view of Gong and Dai because the method steps of claim 3 are the same as claim 12.

Regarding claim 14, all claim limitations are met and rendered obvious by Zhang in view of Gong and Dai because the method steps of claim 5 are the same as claim 14.

Regarding claim 18, Zhang teaches a non-transitory machine-readable medium encoding instructions that when executed by one or more processors cause a process to be carried out (Zhang, para 59: “machine-readable medium having instructions stored thereon, which, when executed by one or more processors, enables the device to execute the method for generating thumbnails based on information flow as described in any one of the first aspects above”). All further claim limitations are met and rendered obvious by Zhang in view of Gong and Dai because the method steps of claim 1 are the same as claim 18.
Regarding claim 19 (dependent on claim 18), Zhang in view of Gong and Dai teaches wherein the determining the topic of the electronic messaging session includes one of applying natural language processing (NLP) to one or more messages in the messaging session (Zhang, para 86: “natural language processing may be, for example, a TextRank algorithm or a Lexrank algorithm, etc. The TextRank algorithm or the Lexrank algorithm may automatically calculate the weight of each word in the text to extract keywords in the text as the text semantics of the text”) or applying a machine learning (ML) model to one or more messages in the messaging session.

Regarding claim 20, all claim limitations are met and rendered obvious by Zhang in view of Gong and Dai because the method steps of claim 5 are the same as claim 20.

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Gong, Dai, and Tian et al. (Tian, Y., Wang, W., Wang, X., Rao, J., Chen, C., & Ma, J. (2010, October). Topic detection and organization of mobile text messages. In Proceedings of the 19th ACM international conference on Information and knowledge management (pp. 1877-1880).), hereinafter Tian.

Regarding claim 4 (dependent on claim 1), Zhang in view of Gong and Dai fails to teach wherein the determining the topic of the electronic messaging session includes applying a machine learning (ML) model to one or more messages in the messaging session. However, Tian teaches a method wherein determining the topic of an electronic messaging session includes applying a machine learning (ML) model to one or more messages in the messaging session (Tian, using Latent Dirichlet Allocation, pg. 1879, section 3: “we first trained a topic model with Latent Dirichlet Allocation (LDA) to measure the semantic similarity between adjacent candidate conversations, and then combining with temporal similarity, we constructed a compositive relevancy vector to process the final candidate conversation consolidation”).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the machine learning model of Tian with the method of Zhang in view of Gong and Dai in order to improve the topic detection using a trained probabilistic model (Tian, pg. 1879, section 3.1: “LDA is a generative probabilistic model which is based on the hypothesis that a document can be represented as a mixture of different topics, each of which is also a probability distribution over words”). Doing so may improve topic detection for shorter samples over other natural language processing techniques (Tian, pg. 1877, section 1: “Hence, traditional approaches for TDT will not work well when applied to text messages. Several research works [3] [4] also focus on identifying the events hidden in personal and social stream objects, such as digital photo collections and social media sites contents (e.g., Flickr, YouTube, and Facebook). For its shortness and sparseness, these methods are not suitable for text messages either”).

Regarding claim 13, all claim limitations are met and rendered obvious by Zhang in view of Gong, Dai, and Tian because the method steps of claim 4 are the same as claim 13.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Gong, Dai, and Kee et al. (U.S. Patent Application Publication No. 2019/0028605 A1), hereinafter Kee.

Regarding claim 6 (dependent on claim 5), Zhang in view of Gong and Dai fails to teach wherein the determining coordinates of the portion of the image includes applying optical character recognition (OCR) to the image.
However, Kee teaches a similar system (Kee, abstract: “During operation an image will be analyzed to determine any annotation existing within the image. When annotation exists, the annotated portion is cropped and displayed as the preview (thumbnail) within, for example, a messaging application”), wherein the determining coordinates of the portion of the image includes applying optical character recognition (OCR) to the image (Kee, para 20-21: “There are multiple ways that annotation may be detected within an image. The following are some examples that are not meant to limit the broader invention. Optical Character Recognition—OCR (optical character recognition) is the recognition of printed or written text characters by a processor 303”). Kee utilizes OCR to determine the portion of the image that will be centered in the thumbnail (Kee, see Figure 2 and para 27).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have utilized OCR, in the same way as taught by Kee, with the method of Zhang in view of Gong and Dai in order to detect salient regions in the image that contain important text to be included in the thumbnail (Kee, para 10: “More specifically, in FIG. 2, a user has created an image to say “thanks” to team “hackers”. As part of this image, the user has annotated the image with a time and date of a party. This is illustrated in FIG. 2 as annotation 101. Once the image has been sent in a text, the texting application crops the portion of the image containing the text in order to display within the text message. This is illustrated in FIG. 2 as image 201. As is evident, the information about the time and date of the party is still maintained in the cropped image.”).

Regarding claim 15, all claim limitations are met and rendered obvious by Zhang in view of Gong, Dai, and Kee because the method steps of claim 6 are the same as claim 15.
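Kee's idea as relied on above — locate annotation text with OCR, then crop that region for the thumbnail — can be sketched as follows. The `(x, y, w, h)` box format and the padding value are assumptions; a real system would obtain `word_boxes` from an OCR engine rather than hard-coding them.

```python
def annotation_crop(word_boxes, image_size, pad=10):
    """Given OCR-detected word boxes (x, y, w, h), return a padded
    bounding box around all detected text to use as the thumbnail crop.
    Returns None when no annotation text was found."""
    if not word_boxes:
        return None  # no annotation: caller falls back to a default thumbnail
    left = min(x for x, y, w, h in word_boxes)
    top = min(y for x, y, w, h in word_boxes)
    right = max(x + w for x, y, w, h in word_boxes)
    bottom = max(y + h for x, y, w, h in word_boxes)
    img_w, img_h = image_size
    return (max(left - pad, 0), max(top - pad, 0),
            min(right + pad, img_w), min(bottom + pad, img_h))

# Hypothetical OCR output: two word boxes for an annotation line
crop = annotation_crop([(40, 300, 120, 30), (170, 300, 150, 30)], (640, 480))
```

The union-then-pad step keeps the full annotation (e.g. a date and time) legible in the cropped preview, which is the benefit the motivation-to-combine paragraph relies on.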
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Gong, Dai, and Csurka (U.S. Patent Application Publication No. 2009/0208118 A1).

Regarding claim 7 (dependent on claim 5), Zhang in view of Gong and Dai fails to teach wherein the determining coordinates of the portion of the image includes applying image labeling to the image. However, Csurka teaches a similar system (Csurka, para 27: “A thumbnail image is generated by cropping a generally rectangular subpart of the image which incorporates at least a part of at least one of the identified candidate regions”), wherein the determining coordinates of the portion of the image includes applying image labeling to the image (Csurka, each pixel is classified, para 41: “At S108, using the selected class model(s), one or more class probability maps P (FIG. 3) are generated. The probability map(s) expresses, for each pixel in the source image 28 (or a reduced resolution version thereof) a probability p that the pixel (or an object of which it forms a part) is in that class c. In general, the probability is related to a determined probability that the pixel is part of an associated group of pixels (referred to herein as a blob) which is classified in that class.”; see Figures 3C and 5). Csurka utilizes image labeling to determine the portion of the image that will be centered in the thumbnail (Csurka, see Figure 1 and para 34).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have utilized image labeling, in the same way as taught by Csurka, with the method of Zhang in view of Gong and Dai in order to detect salient regions in the image that contain key objects using a trained object detection model on each pixel (Csurka, para 27: “The thumbnail/image-crop is thus deduced from the relevant class probability map which is built directly from the source image, using previously trained class models. The intelligent thumbnail/image-crop is thus based on understanding the most relevant or most interesting image region according to the context in which the thumbnail is being used”). As taught by Csurka, this is able to be performed at each pixel, increasing the specificity in identifying different regions in the image.

Regarding claim 16, all claim limitations are met and rendered obvious by Zhang in view of Gong, Dai, and Csurka because the method steps of claim 7 are the same as claim 16.

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Gong, Dai, and Barrus et al. (U.S. Patent No. 6,693,652 B1), hereinafter Barrus.

Regarding claim 8 (dependent on claim 1), Zhang in view of Gong and Dai teaches generating a thumbnail image to include the contents of the image associated with the topic of the messaging session (see claim 1 rejection), but fails to teach wherein the generating the thumbnail image includes modifying an existing thumbnail image to include the contents of the image associated with the topic of the messaging session (emphasis added). However, Barrus teaches a method for generating thumbnail images (Barrus, abstract: “A multimedia message system automatically generates visual representations (thumbnails) of message or media objects”), wherein the generating the thumbnail image includes modifying an existing thumbnail image (Barrus, col 19, ln 12-29: “The dynamic updating module 818 controls the updating of any thumbnails automatically upon modification of an existing message by any user… the dynamic updating module 818 determines other instances where the object or message is displayed as a thumbnail image, and then creates a new thumbnail image and updates all objects that have an outdated image”).
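Csurka's class-probability-map cropping as characterized above can be illustrated generically: given a per-pixel probability map for the relevant class, pick the crop window with the greatest total probability. The brute-force search and the toy 4x4 map are illustrative assumptions only, not the reference's actual method of locating candidate regions.

```python
def best_crop(prob_map, win_w, win_h):
    """Slide a win_w x win_h window over a per-pixel class probability
    map (list of rows of floats) and return the (left, top) corner of
    the window with the highest total probability."""
    rows, cols = len(prob_map), len(prob_map[0])
    best_total, best_xy = -1.0, (0, 0)
    for top in range(rows - win_h + 1):
        for left in range(cols - win_w + 1):
            total = sum(sum(prob_map[r][left:left + win_w])
                        for r in range(top, top + win_h))
            if total > best_total:
                best_total, best_xy = total, (left, top)
    return best_xy

# Toy probability map for one class, peaking in the bottom-right corner
prob_map = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.2, 0.2, 0.1],
    [0.1, 0.2, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
]
corner = best_crop(prob_map, 2, 2)
```

Because the map assigns a probability to every pixel, the crop can be localized at pixel granularity, which is the specificity advantage the motivation paragraph notes.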
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the method of modifying an existing thumbnail image, taught by Barrus, with the method of Zhang in view of Gong and Dai in order to ensure that the thumbnail image accurately represents the most recent relevant topic in the messaging session (Barrus, col 19, ln 29-33: “In this manner, the present invention ensures that the thumbnail images are an accurate reflection of the current state of a message or object and provide an invaluable source of information to the users of the system.”).

Regarding claim 17, all claim limitations are met and rendered obvious by Zhang in view of Gong, Dai, and Barrus because the method steps of claim 8 are the same as claim 17.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Suh et al. (cited in the Non-Final Office Action - Suh, B., Ling, H., Bederson, B. B., & Jacobs, D. W. (2003, November). Automatic thumbnail cropping and its effectiveness. In Proceedings of the 16th annual ACM symposium on User interface software and technology (pp. 95-104).) teaches a method for generating image thumbnails based on objects in the image (abstract: “Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable.”).

Yuan et al. (cited in the Non-Final Office Action - Yuan, Y., Ma, L., & Zhu, W. (2019, October). Sentence specified dynamic video thumbnail generation. In Proceedings of the 27th ACM international conference on multimedia (pp. 2332-2340).) teaches a method for generating thumbnail images based on sentences (abstract: “In this paper, we define a distinctively new task, namely sentence specified dynamic video thumbnail generation, where the generated thumbnails not only provide a concise preview of the original video contents but also dynamically relate to the users’ searching intentions with semantic correspondences to the users’ query sentences”; see also Figure 1).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571)272-1179. The examiner can normally be reached M-F 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ANDREW BEE, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMMA E DRYDEN/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Sep 19, 2022 — Application Filed
Aug 29, 2023 — Response after Non-Final Action
Sep 21, 2023 — Response after Non-Final Action
Oct 14, 2025 — Non-Final Rejection (§103)
Jan 20, 2026 — Response Filed
Feb 05, 2026 — Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561873 — IMAGE PROCESSING APPARATUS AND METHOD (granted Feb 24, 2026; 2y 5m to grant)
Patent 12543950 — SLIT LAMP MICROSCOPE, OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC SYSTEM, METHOD OF CONTROLLING SLIT LAMP MICROSCOPE, AND RECORDING MEDIUM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12526379 — AUTOMATIC IMAGE ORIENTATION VIA ZONE DETECTION (granted Jan 13, 2026; 2y 5m to grant)
Patent 12340443 — METHOD AND APPARATUS FOR ACCELERATED ACQUISITION AND ARTIFACT REDUCTION OF UNDERSAMPLED MRI USING A K-SPACE TRANSFORMER NETWORK (granted Jun 24, 2025; 2y 5m to grant)

Based on this examiner's 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58% (83% with interview, a +25.0% lift)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
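The headline numbers above follow from simple arithmetic on the examiner's career data; a sketch, assuming (as this tool appears to) that the interview lift is added to the career allow rate as percentage points:

```python
# Examiner career data from this report
granted, resolved = 7, 12
career_allow_rate = granted / resolved               # 7/12, displayed as 58%
interview_lift = 0.25                                # +25.0 percentage points

# Assumed additive model: allow rate plus interview lift
with_interview = career_allow_rate + interview_lift  # displayed as 83%

assert round(career_allow_rate * 100) == 58
assert round(with_interview * 100) == 83
```

Whether the tool actually applies the lift additively (rather than, say, conditioning on interviewed cases only) is not stated; the assertion only checks that the displayed 58% and 83% are consistent with that additive reading.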
