by TIP News
David Hutton, Chief Engineer at TIP, explained it nicely in the final breakout session of day 2, “Open RAN Next Steps: What’s Cooking”. David explained that the O-RAN Alliance is a standards body that creates specifications and does other standards work, whereas TIP does no standards work and has a scope that spans far beyond the RAN.
As far as Open RAN is concerned, he explained that TIP is focused on productization. In the traditional approach, operators speak to vendors on a bilateral basis to ensure they receive the features and profile for the type of deployment they have. While this can work nicely for incumbent vendors, it becomes tricky with a large number of vendors, some of whom may not have the scale to talk to a large number of operators.
This is where TIP comes into play: its OpenRAN group tries to centralize as much as possible of the baselining of product requirements that an Open RAN vendor needs to support, and provides a baseline level of testing against those requirements. This saves operators time, resources and cost, as they do not need to repeat integration and system-level testing. It also helps vendors prioritize their own roadmaps, and it eliminates a number of different SKUs (stock keeping units) and product variants for different operators.
This centralization is the key to accelerating Open RAN adoption according to David.
To provide details on this, David shared the slide shown above. He explained that the standards work happens even before the ideate phase. Once the standards have been defined, TIP looks at which features are relevant for productization, based on different types of deployment configurations.
This list of operator requirements is then sent to the Open RAN vendors, asking which features are supported today and which would be supported in 6, 12 and/or 18 months. The vendors’ responses help build an industry roadmap.
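To make this aggregation step concrete, here is a minimal sketch of how vendor responses could be rolled up into an industry roadmap. The requirement IDs, vendor names and horizons (0 = today, then 6/12/18 months) are illustrative assumptions, not actual TIP data or tooling.

```python
from collections import defaultdict

# Hypothetical vendor responses: for each requirement, the earliest point
# (in months from now) at which the vendor says it will be supported.
vendor_responses = {
    "vendor_a": {"REQ-001": 0, "REQ-002": 6, "REQ-003": 18},
    "vendor_b": {"REQ-001": 0, "REQ-002": 12},
    "vendor_c": {"REQ-002": 0, "REQ-003": 6},
}

def build_industry_roadmap(responses):
    """Group responses by requirement: which vendor supports it, and when."""
    roadmap = defaultdict(dict)
    for vendor, features in responses.items():
        for req, months in features.items():
            roadmap[req][vendor] = months
    return dict(roadmap)

roadmap = build_industry_roadmap(vendor_responses)

# Earliest industry-wide availability of each requirement:
earliest = {req: min(support.values()) for req, support in roadmap.items()}
```

The `earliest` view is the kind of summary that lets operators see when a requirement becomes available anywhere in the ecosystem, while the full `roadmap` preserves the per-vendor detail.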
Once the product requirements are ready, it is important to define the test and validation requirements independently of the deployment scenarios operators are looking for. This ensures that the common test plans deliver the same results regardless of whether a product has been tested in Asia, Africa, North America or anywhere else. The badge awarded at the end of testing attests that the product was tested against the same criteria.
The marketplace is the central repository showcasing the systems that have been tested through the TIP process and conform to the O-RAN specifications.
David went on to explain that the test & validation (T&V) process has been revamped to support a three-tier badging system – bronze, silver, gold – which is based on TIP’s T&V Framework and demonstrates the increasing maturity and interoperability of individual products, product combinations, or solutions.
Bronze badges demonstrate a vendor’s self-assessed compliance with TIP requirements, while silver and gold badges involve increasing levels of testing. The bronze, silver, and gold badge requirements are based on use cases emerging from TIP’s various project groups.
When vendors respond to the list of product requirements, they get the bronze badge. This means they are RFI-ready and listed on TIP Exchange, where everybody can see the results. From an operator’s perspective, this helps with issuing RFIs. One way to look at this process is as TIP issuing a common RFI on behalf of all operators and making the results available.
For silver and gold badges, the project groups also define relevant test requirements and/or test plans. Vendors and operators should review the TIP requirements and test plans associated with particular badges.
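The tiering described above can be sketched as a small piece of selection logic. This is a hypothetical illustration of the bronze/silver/gold progression, not TIP's actual badging implementation; the function name and boolean inputs are assumptions for the sketch.

```python
from enum import Enum
from typing import Optional

class Badge(Enum):
    BRONZE = 1  # self-assessed compliance with TIP requirements (RFI-ready)
    SILVER = 2  # verified against TIP-defined test requirements/plans
    GOLD = 3    # highest level of testing and interoperability maturity

def award_badge(self_assessed: bool,
                passed_silver_tests: bool,
                passed_gold_tests: bool) -> Optional[Badge]:
    """Return the highest badge a product qualifies for, or None."""
    if passed_gold_tests:
        return Badge.GOLD
    if passed_silver_tests:
        return Badge.SILVER
    if self_assessed:
        return Badge.BRONZE
    return None
```

For example, a vendor that has responded to the requirements but not yet entered testing would hold bronze; passing the relevant test plans moves it up the tiers.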
A bronze or silver badge does not mean that a particular product, combination, or solution is not suitable for deployment. Rather, operators will typically do more in-house testing before deploying a bronze or silver product, combination, or solution alongside other technologies, or in a particular use case. For example, some Tier 1 operators may continue relying on their own testing of bronze or silver-badged products, while smaller operators may prefer silver or gold-badged solutions.
While the traditional approach, where each OEM vendor performs a full RAN system validation, has worked well in the past and continues to work in the case of monolithic deployments, a more dynamic approach is needed going forward.
David went on to explain that this type of system testing must be based on a continuous-testing methodology. The TIP Release Roadmap will regularly define new releases, and the contents of each release will be updated depending on what products can support and what the requirements are.
This makes it important to rely on continuous testing of components and releases, moving away from the plugfest-based approach of once or twice a year. The whole process should also be automated to deliver the same level of testing as the traditional approach, binding in CI/CD processes and ultimately delivering into the operators’ own staging areas.
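The continuous-testing idea above can be sketched as a loop that re-runs the common test plan over every component for every release, instead of once or twice a year. The component names, release labels and the `run_test_plan` stub are all illustrative assumptions, standing in for real automated CI jobs.

```python
def run_test_plan(component: str, release: str) -> bool:
    """Placeholder for an automated CI job executing the common test plan.

    A real pipeline would launch the test suite for this component against
    this release and return the actual pass/fail result.
    """
    return True  # stubbed result for the sketch

def continuous_test(components, releases):
    """Run the common test plan for every (component, release) pair."""
    return {(c, r): run_test_plan(c, r) for r in releases for c in components}

# Illustrative components and TIP-style release labels (assumptions):
report = continuous_test(["CU", "DU", "RU"], ["2022.1", "2022.2"])
```

The point of the structure is that each new release automatically triggers a fresh run over all registered components, so test results stay current with the release roadmap rather than with a plugfest calendar.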
Keeping the traditional testing mindset for disaggregated networks won’t allow operators to benefit from economies of scale in testing or accelerate deployments of Open RAN networks. The TIP Innovation Hub, a federated fulfilment model, aims to solve this problem with a “test once, deploy many times” approach.
The approach is to focus on a number of different blueprints and configurations and test them through a centralized mechanism. TIP’s Innovation Hub does this for every release, validating the system to ensure commercial-grade testing. This is the most sensible approach to accelerating commercial deployments of OpenRAN.
Embedded below are videos from Fyuz 2022, day 2 breakout, ‘Open RAN Next Steps: What’s Cooking’.
To learn more about Test & Validation, check out this page.