Prosecution Insights
Last updated: April 19, 2026
Application No. 18/508,878

METHOD AND SYSTEM FOR EDGE CACHING AS A SERVICE

Final Rejection §103
Filed: Nov 14, 2023
Examiner: PARRY, CHRISTOPHER L
Art Unit: 2451
Tech Center: 2400 — Computer Networks
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 2 (Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 54% (82 granted / 152 resolved; -4.1% vs TC avg)
Interview Lift: +17.7% for resolved cases with interview (strong)
Typical Timeline: 4y 0m avg prosecution; 13 currently pending
Career History: 165 total applications across all art units
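As a sanity check, the headline rates above follow directly from the career counts shown on this page (a minimal sketch; the variable names are mine):

```python
# Illustrative arithmetic for the examiner metrics shown above.
granted = 82            # granted cases (from this page)
resolved = 152          # resolved cases (from this page)
allow_rate = granted / resolved          # career allow rate
interview_lift = 0.177                   # +17.7 points with interview
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0%}")      # ~54%
print(f"With interview:    {with_interview:.0%}")  # ~72%
```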

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 16.4% (-23.6% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 152 resolved cases
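The deltas above imply a Tech Center baseline for each statute (assumed reading: baseline = examiner rate minus the reported delta); a few lines recover it from the figures shown:

```python
# Each statute line reports the examiner's rate and its delta vs the
# Tech Center average, so the implied TC baseline is rate - delta.
stats = {
    "§101": (9.4, -30.6),
    "§103": (58.6, 18.6),
    "§102": (16.4, -23.6),
    "§112": (10.7, -29.3),
}
for statute, (rate, delta) in stats.items():
    baseline = rate - delta
    print(f"{statute}: examiner {rate}% vs implied TC avg {baseline:.1f}%")
```

Under that reading, the implied baseline comes out to roughly 40% for every statute.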

Office Action — §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-11, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Seo (US 2021/0112136 A1) in view of Steiner et al. "Steiner" (US 2014/0067898 A1).

Regarding Claim 1, Seo discloses a device (300 – fig. 4A), comprising: a processing system including a processor (¶ 0267-0268); and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations (¶ 0267-0268), the operations comprising: identifying data that is associated with an application being executed by an end user device (100 – fig. 4A) (i.e., establishing a communication session between a user terminal and an MEC edge service, wherein the user terminal sends application information to the MEC edge service, effectively informing the MEC edge service of the application data, which is therefore "identified" by the MEC edge service (¶ 0060-0065)) and that has been transmitted from a data center (400 – fig. 4A) in response to a first request by the application (the MEC service provides the application and its information by "driving" it into the client; the MEC service and its components effectively function as the claimed data center (¶ 0090, 0174)); determining that the data has been designated for edge caching resulting in a caching determination (wherein an edge application may trigger a "service triggering rule", including a cache service trigger wherein the associated data is "cached and provided" from the service server for providing low latency of the application data sent/received (¶ 0117-0119, 0125, and 0088-0090)); in response to the caching determination, selecting an edge server from a group of edge servers based on location information of the end user device (i.e., providing terminal location information to the service server within the request (¶ 0100 & 0125), wherein the edge cache is location-based for latency purposes, i.e., closer to the client (¶ 0048-0050, 0003, 0118-0123)); causing the data to be stored at the edge server (i.e., the edge data network 300 may cache data determined based on the movement information of the terminal 100 in the region of interest and the configured cache rule, from the service server 400 (¶ 0119, 0133, 0148-0149)); and facilitating providing the data from the edge server to the end user device in response to a second request by the application (i.e., returning pre-stored data by request (¶ 0133 and 0119-0124)).
Although Seo discloses a service server 400, Seo fails to explicitly disclose that service server 400 is a hyperscaler in a cloud data center, wherein data transmissions from the hyperscaler in the cloud data center are associated with a first cost that is higher than a second cost associated with data transmissions from the edge server, and wherein one or more of the selecting the edge server or the causing the data to be stored at the edge server is based on a determination to reduce or eliminate the first cost for transmission of the data to the end user device.

In analogous art, Steiner discloses identifying data (i.e., a requested content item) that is associated with an application being executed by an end user device (350 – fig. 3) (i.e., centralized data center 310 is configured to serve requests for content items received from end user devices 350 (¶ 0050)) and that has been transmitted from a hyperscaler in a cloud data center (i.e., centralized data center 110 may be any suitable type of data center (e.g., a centralized cloud-based data center) or "hyperscaler"; 310 – fig. 3) in response to a first request by the application (i.e., requests received from end user devices 350 (¶ 0050)); determining that the data has been designated for edge caching resulting in a caching determination (i.e., management system 360 is configured to determine the fraction of the content item versions 313 to store in an edge data center 330 based on (1) a popularity distribution of the content item versions 313 accessed from the edge data center 330 and (2) cost model information associated with the centralized data center 310 and the edge data center 330 (¶ 0061 & 0073)); causing the data to be stored at the edge server (i.e., the caching of the content item version 313 at the edge data center(s) 330 may be in place of or in addition to caching of the content item version 313 at the centralized data center 310 (¶ 0061 & 0073)); and facilitating providing the data from the edge server to the end user device (i.e., if edge data center 330 determines that the content item is locally cached, the content is provided to end user device 350 (¶ 0065 & 0067)), wherein data transmissions from the hyperscaler (i.e., data center 310) in the cloud data center are associated with a first cost (i.e., storage costs and computing costs at centralized data center 310 are greater) that is higher than a second cost (i.e., the cost to provide content from edge data center 330 is less than from centralized data center 310) associated with data transmissions from the edge server, and wherein one or more of the selecting the edge server or the causing the data to be stored at the edge server is based on a determination to reduce or eliminate the first cost for transmission of the data to the end user device (i.e., management system 360 is configured to determine content items to store in the edge data center based on cost model information (¶ 0057 and 0061)).
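Read as an algorithm, the claimed flow mapped above (identify the data, make a caching determination, select the nearest edge server, store the data there, then serve repeat requests from the edge rather than the costlier data center) could be sketched as follows. This is an illustrative reading of the claim language, not code from Seo or Steiner; the server names, coordinates, and designation rule are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    location: tuple  # (x, y) stand-in for geographic coordinates
    cache: dict

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def serve(key, user_loc, edges, datacenter, designated_for_edge):
    """Return (data, source). A first request falls through to the data
    center; if the data is designated for edge caching, it is stored at
    the edge server closest to the user so later requests avoid the
    higher data-center transmission cost."""
    nearest = min(edges, key=lambda e: distance(e.location, user_loc))
    if key in nearest.cache:                 # later request: edge hit
        return nearest.cache[key], nearest.name
    data = datacenter[key]                   # first request: origin fetch
    if designated_for_edge(key):             # the caching determination
        nearest.cache[key] = data            # store at the selected edge
    return data, "datacenter"

# Hypothetical setup
edges = [EdgeServer("edge-east", (0, 0), {}), EdgeServer("edge-west", (10, 0), {})]
origin = {"app-data": b"payload"}
mark = lambda key: True  # here, everything is designated for edge caching

d1, src1 = serve("app-data", (1, 1), edges, origin, mark)  # origin fetch + cache
d2, src2 = serve("app-data", (1, 1), edges, origin, mark)  # served from the edge
print(src1, src2)  # datacenter edge-east
```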
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Seo to include a hyperscaler in a cloud data center, wherein data transmissions from the hyperscaler in the cloud data center are associated with a first cost that is higher than a second cost associated with data transmissions from the edge server, and wherein one or more of the selecting the edge server or the causing the data to be stored at the edge server is based on a determination to reduce or eliminate the first cost for transmission of the data to the end user device, as taught by Steiner, for the benefit of reducing computational costs and storage capacity (Steiner ¶ 0002).

As for Claim 5, Seo and Steiner disclose, and in particular Seo teaches, causing the data to be deleted from the edge server after expiration of a time period (i.e., the cache service may determine a storage period (¶ 0129)).

As for Claim 6, Seo and Steiner disclose, and in particular Seo teaches, selecting a second edge server from the group of edge servers based on second location information of the end user device (determining movement information of the terminal along with location information (¶ 0048-0050, 0003, 0110-0123)); causing the data to be stored at the second edge server (based upon the movement information, placing the data at the second edge server in a region of interest (¶ 0110-0118)); and facilitating providing the data from the second edge server to the end user device in response to a third request by the application (based upon the movement of the terminal, providing the content from the second cache at the region of interest in response to a request (¶ 0110-0118, 0133)).
As for Claim 7, Seo and Steiner disclose, and in particular Seo teaches, that causing the data to be stored at the second edge server comprises transmitting the data from the edge server to the second edge server (based upon movement information, moving data and service information to the edge cache in the target region of interest (¶ 0110-0118), wherein the service provided is the caching of application data (¶ 0090, 0174)).

As for Claim 8, Seo and Steiner disclose, and in particular Seo teaches, causing the data to be deleted from the edge server in response to the causing the data to be stored at the second edge server (moving services from a first edge to a second in response to terminal movement (¶ 0114-0119), wherein the service includes application data caching (¶ 0090), wherein stored data is stored until expiry of a storage period (¶ 0129-0131), and wherein a storage period persists while the service is provided to the client ("the cache service 333 may dynamically determine the storage period based on … the movement information of the terminal 100 requesting the cached data in the region of interest related to the cached data"); therefore, moving from the first cache to the second edge ends service provision on the first cache, effectively ending the storage period on the first cache (¶ 0129)).

As for Claim 9, Seo and Steiner disclose, and in particular Seo teaches, predicting that the application will make the third request for the data, wherein the selecting the second edge server and the causing the data to be stored at the second edge server is in response to the predicting of the third request (future requests for data are anticipated and prepared for based upon the movement of the terminal (¶ 0115, 0120-0129)).
As for Claim 10, Seo and Steiner disclose, and in particular Seo teaches, wherein the operations further comprise: determining that the data has been adjusted resulting in adjusted data (updating the MEC service (¶ 0093) and transmitting (e.g., propagating) the changes to the edge application (¶ 0095)) and that the adjusted data has been transmitted from the hyperscaler in response to a third request by the application (the MEC service provides the application and its information by "driving" it into the client; the MEC service and its components effectively function as the claimed hyperscaler (¶ 0090, 0174)); determining that the adjusted data has been designated for edge caching resulting in a second caching determination (the cache rule includes service updates based upon movement information (¶ 0120-0121), wherein the cache rule is consulted in considering whether to cache data (¶ 0121)); in response to the second caching determination, selecting the edge server from a group of edge servers based on current location information of the end user device (edge services provided based upon terminal location (¶ 0048-0050, 0003, 0118-0123)); causing the adjusted data to be stored at the edge server (¶ 0100-0100, 0048-0050); and facilitating providing the adjusted data from the edge server to the end user device in response to a fourth request by the application (providing the (now updated) cache data in response to a terminal/application request (¶ 0133, 0120-0122)).

As for Claim 11, Seo and Steiner disclose, and in particular Seo teaches, wherein the application is executed at the end user device utilizing the data provided from the edge server (data exchange between the MEC and the terminal drives the terminal application (¶ 0088-0090, 0111-0112)) and utilizing other data provided from the hyperscaler (further including service availability and subscription information (¶ 0089-0094)).

Claim 14 is rejected on the same prior art and grounds as claim 1 for the reasons previously set forth.
As for Claim 15, Seo and Steiner disclose, and in particular Seo teaches, wherein the data comprises user information corresponding to a user of the end user device (edge services provided based upon terminal location (¶ 0048-0050, 0003, 0118-0123)).

As for Claim 16, Seo and Steiner disclose, and in particular Seo teaches, wherein the executing of the application utilizes the data provided from the edge server (data exchange between the MEC and the terminal drives the terminal application (¶ 0088-0090, 0111-0112)) and utilizes other data provided from the hyperscaler (further including service availability and subscription information (¶ 0089-0094)).

As for Claim 17, Seo and Steiner disclose, and in particular Seo teaches, wherein the operations further comprise: providing a third request for the data associated with the application, wherein the third request is provided when the end user device is at a second location (the computing device sends requests for application data (¶ 0060-0061, 0119-0124); edge services are provided based upon terminal location and in response to movement (¶ 0048-0050, 0003, 0118-0123)); and receiving the data from a second edge server in response to the third request, wherein the second edge server was selected from the group of edge servers based on the second location of the end user device, and wherein the data was stored at the second edge server (edge services, including the cached data, cached and provided based upon terminal location (¶ 0100-0101, 0048-0050, 0003, 0118-0123); based upon the movement of the terminal, providing the content from the second cache at the region of interest in response to a request (¶ 0110-0118, 0133)).

Claim 18 is rejected on the same prior art and grounds as claim 1 for the reasons previously set forth.
As for Claim 19, Seo and Steiner disclose, and in particular Seo teaches, wherein the facilitating providing access includes transmitting the data from the network device to the end user device, and wherein the second request is sent by the end user device (¶ 0060-0064, 0088-0090).

As for Claim 20, Seo and Steiner disclose, and in particular Seo teaches, the method of claim 18, comprising: selecting, by the processing system, a second network device from the group of network devices based on a second location of the end user device (determining movement information of the terminal along with location information (¶ 0048-0050, 0003, 0110-0123)); causing, by the processing system, the data to be stored at the second network device by sending a message to the network device that causes the network device to transmit the data to the second network device (based upon the movement information, placing the data at the second edge server in a region of interest (¶ 0110-0118)); and facilitating, by the processing system, providing the data from the second network device to the end user device in response to a third request by the application (based upon the movement of the terminal, providing the content from the second cache at the region of interest in response to a request (¶ 0110-0118, 0133)).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Seo in view of Steiner as applied to claim 1 above, and further in view of Wei et al. "Wei" (USPN 12,058,206 B1).

As for Claim 2, Seo and Steiner fail to explicitly disclose wherein the first cost comprises an access cost for accessing data from the hyperscaler, an egress cost for transmission of data from the hyperscaler, or a combination thereof.
In analogous art, Wei discloses wherein the first cost comprises an access cost for accessing data from the hyperscaler, an egress cost for transmission of data from the hyperscaler (i.e., the cost of data moving around (C.sub.data) is calculated based on egress charges by the cloud service provider for the data moving out of the cloud service provider (Col. 31, lines 10-19)), or a combination thereof. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Seo and Steiner to include wherein the first cost comprises an access cost for accessing data from the hyperscaler, an egress cost for transmission of data from the hyperscaler, or a combination thereof, as taught by Wei, for the benefit of charging a fee for moving data out of the cloud service provider.

Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Seo in view of Steiner as applied to claim 1 above, and further in view of Hartsell et al. "Hartsell" (US 2002/0174227 A1).

As for Claim 3, Seo and Steiner disclose, and in particular Seo teaches, wherein the data is designated for edge caching according to user input via an interface connected with the hyperscaler (a request identifying the data is received from the user terminal by the MEC service (¶ 0048, 0077)). However, Seo and Steiner fail to disclose an application programming interface. In analogous art, Hartsell teaches Application Programming Interfaces (APIs) used to interface applications and systems (¶ 0247-0252). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Seo and Steiner to include Application Programming Interfaces (APIs) as taught by Hartsell for the benefit of providing precise, secure, auditable, and automatable control over which cloud-originated data should be cached at the edge.
As for Claim 4, Seo, Steiner, and Hartsell disclose wherein the determining that the data has been designated for edge caching is based on metadata generated via the application programming interface (specifically, the subscription request comprises identifying information (Seo ¶ 0096), utilizing APIs for communication (Hartsell ¶ 0247-0252)) (inherits the motivation to combine from the respective parent claim).

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Seo in view of Steiner as applied to claim 1 above, and further in view of Anand (US 2025/0106446 A1), hereinafter "Anand".

As for Claim 12, Seo and Steiner disclose, and in particular Seo teaches, facilitating providing the data from the edge server to an end user device in response to a request by an application being executed by the second end user device (Seo ¶ 0114-0119, 0133). However, Seo and Steiner fail to disclose wherein the end user device and the second end user device are associated with a same user. In analogous art, Anand teaches wherein the end user device and the second end user device are associated with a same user (wherein edge caching services include communications which comprise "subscriber" information and identifying information, and data is cached at the edge for access by a plurality of devices based upon subscriber information (¶ 0023, 0026, 0029)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Seo and Steiner to include wherein the end user device and the second end user device are associated with a same user, as taught by Anand, for the benefit of yielding the predictable result of enabling a user to access data from any device they own or operate. One would be motivated as such because users often own and utilize a plurality of different devices (e.g., mobile smart phones, desktop computers, laptops, etc.) to perform operations and/or functions.
As for Claim 13, Seo and Steiner disclose, and in particular Seo teaches, selecting a second edge server from the group of edge servers based on location information of an end user device and causing the data to be stored at the second edge server; and facilitating providing the data from the second edge server to the end user device in response to a request by the application being executed by the end user device (¶ 0100-0100, 0048-0050, 0133). Seo and Steiner fail to disclose wherein the end user device and the second end user device are associated with a same user. In analogous art, Anand teaches wherein the end user device and the second end user device are associated with a same user (wherein edge caching services include communications which comprise "subscriber" information and identifying information, and data is cached at the edge for access by a plurality of devices based upon subscriber information (¶ 0023, 0026, 0029), wherein data is stored at an edge based upon location information of the user (¶ 0031-0033)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Seo and Steiner to include wherein the end user device and the second end user device are associated with a same user, as taught by Anand, for the benefit of yielding the predictable result of enabling a user to access data from any device they own or operate. One would be motivated as such because users often own and utilize a plurality of different devices (e.g., mobile smart phones, desktop computers, laptops, etc.) to perform operations and/or functions.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following patents and publications are related to the general state of the art of cost-aware routing: Patel et al. (US 2025/0119375 A1); Sinha (US 2025/0055846 A1); Singhal et al.
(USPN 11,929,838 B1).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRIS PARRY, whose telephone number is (571) 272-8328. The examiner can normally be reached Monday through Thursday, 7:00 am to 4:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Colleen Fauz, can be reached at 571-272-1667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

CHRIS PARRY
Supervisory Patent Examiner
Art Unit 2451
/Chris Parry/
Supervisory Patent Examiner, Art Unit 2451

Prosecution Timeline

Nov 14, 2023
Application Filed
Apr 24, 2025
Non-Final Rejection — §103
Jul 29, 2025
Response Filed
Feb 02, 2026
Final Rejection — §103
Apr 15, 2026
Examiner Interview Summary
Apr 15, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12556966
METHOD AND APPARATUS FOR TRANSMITTING EMERGENCY BUFFER STATUS REPORT IN WIRELESS COMMUNICATION SYSTEM
Granted Feb 17, 2026 • 2y 5m to grant

Patent 12463915
METHOD AND APPARATUS FOR MANAGING A PACKET RECEIVED AT A SWITCH
Granted Nov 04, 2025 • 2y 5m to grant

Patent 11329867
DISTRIBUTED SYSTEM OF HOME DEVICE CONTROLLERS
Granted May 10, 2022 • 2y 5m to grant

Patent 11115356
EMOJI RECOMMENDATION SYSTEM AND METHOD
Granted Sep 07, 2021 • 2y 5m to grant

Patent 10778690
MANAGING A FLEET OF DEVICES
Granted Sep 15, 2020 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 72% (+17.7%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 152 resolved cases by this examiner. Grant probability derived from career allow rate.
